[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432212#comment-15432212
 ] 

Hadoop QA commented on HADOOP-13448:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
33s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
46s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824993/HADOOP-13448-HADOOP-13345.001.patch
 |
| JIRA Issue | HADOOP-13448 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7624106b85a1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 763f049 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10341/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10341/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3Guard: Define MetadataStore interface.
> 
>
> Key: HADOOP-13448
> URL: https://issues.apache.org/jira/browse/HADOOP-13448
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13448-HADOOP-13345.001.patch
>

[jira] [Updated] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-08-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13448:
---
Status: Patch Available  (was: Open)

> S3Guard: Define MetadataStore interface.
> 
>
> Key: HADOOP-13448
> URL: https://issues.apache.org/jira/browse/HADOOP-13448
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13448-HADOOP-13345.001.patch
>
>
> Define the common interface for metadata store operations.  This is the 
> interface that any metadata back-end must implement in order to integrate 
> with S3Guard.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-08-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13448:
---
Attachment: HADOOP-13448-HADOOP-13345.001.patch

I'm attaching a patch to get the ball rolling.  I approached this by looking at 
both prototype patches attached to HADOOP-13345 and trying to distill the 
required parts of each one into a single interface.  Here is a summary:

* I named it {{MetadataStore}}, as in Aaron's prototype, not 
{{ConsistentStore}} as in mine.  The former is more accurate, because our 
current design allows for S3Guard to take responsibility for a variety of 
concerns, such as caching, instead of just consistency.
* I declared an {{initialize}} method and {{implements Closeable}}, as in my 
prototype.  I prefer that {{MetadataStore}} implementations take responsibility 
for their init/shutdown lifecycle.
* I declared {{delete}} and {{deleteSubtree}} as separate methods as in my 
prototype rather than a single method with a recursive {{boolean}} argument as 
in Aaron's prototype.  I have a slight preference for the explicit breakdown of 
distinct methods.  If people strongly prefer the {{boolean}} argument for 
closer symmetry with {{FileSystem}}, then I would be willing to compromise.
* I used the name {{put}} (Aaron's prototype) instead of {{save}} (my 
prototype).  I have not included a separate {{putNew}}, because I'd like to 
explore the possibility that the passed metadata object is sufficient to 
describe the {{put}} vs. {{putNew}} use case from Aaron's prototype patch.
* {{PathMetadata}} is mostly a copy from my prototype in this revision.  I 
expect before we're done we'll need to add the {{isFullyCached}} flag to 
{{DirectoryPathMetadata}} or introduce a new sub-type for 
{{FullyCachedDirectoryPathMetadata}}.  I also expect we'll need to add a 
sub-type for tombstones to track explicitly that a path was deleted.
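For illustration only, the design points above might look roughly like the following sketch. All names and signatures here are guesses at the shape being discussed, not the contents of the attached patch; the toy in-memory implementation exists purely to make the sketch runnable.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the MetadataStore shape discussed above.
// The real signatures are in HADOOP-13448-HADOOP-13345.001.patch.
interface MetadataStore extends Closeable {
  void initialize() throws IOException;               // explicit init lifecycle
  void put(PathMetadata meta) throws IOException;     // single put, no putNew
  PathMetadata get(String path) throws IOException;
  void delete(String path) throws IOException;        // non-recursive
  void deleteSubtree(String path) throws IOException; // recursive variant
}

// Simplified stand-in for PathMetadata; the isDeleted flag models the
// tombstone sub-type mentioned above.
class PathMetadata {
  final String path;
  final boolean isDeleted;
  PathMetadata(String path) { this(path, false); }
  PathMetadata(String path, boolean isDeleted) {
    this.path = path;
    this.isDeleted = isDeleted;
  }
}

// Toy in-memory implementation, purely illustrative.
class InMemoryMetadataStore implements MetadataStore {
  private final Map<String, PathMetadata> entries = new HashMap<>();
  @Override public void initialize() {}
  @Override public void put(PathMetadata meta) { entries.put(meta.path, meta); }
  @Override public PathMetadata get(String path) { return entries.get(path); }
  @Override public void delete(String path) { entries.remove(path); }
  @Override public void deleteSubtree(String path) {
    entries.keySet().removeIf(p -> p.equals(path) || p.startsWith(path + "/"));
  }
  @Override public void close() {}
}
```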

This is very much open to suggestions from others, so please let me know your 
feedback.  This is also not planned to be a public interface, at least not in 
the short-term, so we'll have freedom to evolve it to meet our requirements.

Pre-commit will warn that there are no tests.  Since this is (mostly) interface 
definition, I don't plan to write tests immediately.  Tests written for 
subsequent S3Guard sub-tasks would cover this.


> S3Guard: Define MetadataStore interface.
> 
>
> Key: HADOOP-13448
> URL: https://issues.apache.org/jira/browse/HADOOP-13448
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13448-HADOOP-13345.001.patch
>
>
> Define the common interface for metadata store operations.  This is the 
> interface that any metadata back-end must implement in order to integrate 
> with S3Guard.






[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-22 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Description: 
1. add OSSFileStatus, FileStatus -> OSSFileStatus
2. argument and variable naming
3. utility class

  was:
1. add OSSFileStatus, FileStatus -> OSSFileStatus
2. argument and variable naming


> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
>
> 1. add OSSFileStatus, FileStatus -> OSSFileStatus
> 2. argument and variable naming
> 3. utility class






[jira] [Updated] (HADOOP-13529) Do some code refactoring

2016-08-22 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13529:
---
Description: 
1. argument and variable naming
2. utility class

  was:
1. add OSSFileStatus, FileStatus -> OSSFileStatus
2. argument and variable naming
3. utility class


> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
>
> 1. argument and variable naming
> 2. utility class






[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS for the authentication failure of proxy user

2016-08-22 Thread Suraj Acharya (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432067#comment-15432067
 ] 

Suraj Acharya commented on HADOOP-13526:


[~xiaochen] and [~anu] thanks :)

> Add detailed logging in KMS for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13526.patch, HADOOP-13526.patch.1, 
> HADOOP-13526.patch.2, HADOOP-13526.patch.3
>
>
> Problem:
> User A was not able to write a file to an HDFS encryption zone. It was 
> resolved by adding user A as a proxy user in kms-site.xml.
> However, the logs showed:
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution:
> The message saying the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter in 
> {{DelegationTokenAuthenticationFilter}} is called, it hits an exception, and 
> there is no log message at that point. This gives the impression of success 
> even though the exception happens in the next class.
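As a hedged, self-contained illustration of the fix direction (the committed patch edits DelegationTokenAuthenticationFilter.java; all names below are invented): record why a proxy-user request fails at the point the failure is detected, so the earlier "user [usera] authenticated" DEBUG line is not the last trace in the log.

```java
import java.util.Set;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch: log the proxy-user authorization failure where it
// happens, instead of letting the exception propagate silently after the
// initial authentication succeeded.
final class ProxyAuthCheck {
  private static final Logger LOG =
      Logger.getLogger(ProxyAuthCheck.class.getName());

  static boolean authorizeProxy(String realUser, String doAs,
                                Set<String> allowedProxyUsers) {
    if (!allowedProxyUsers.contains(realUser)) {
      // The missing piece described above: an explicit record of WHY
      // the request failed.
      LOG.log(Level.WARNING,
          "Authentication failure: user {0} is not allowed to impersonate {1}",
          new Object[] {realUser, doAs});
      return false;
    }
    return true;
  }
}
```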






[jira] [Comment Edited] (HADOOP-13375) o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky

2016-08-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432026#comment-15432026
 ] 

Weiwei Yang edited comment on HADOOP-13375 at 8/23/16 2:49 AM:
---

This issue happens to me as well when I am working on some other issues. It 
fails quite often because the test case uses very short sleeps. To get rid of 
this, I have a patch that replaces the sleep-then-check logic with a 
wait-on-value check: it checks whether the counter reaches the expected value 
within a given timeout, polling at a small interval to keep the wait short. It 
runs faster than before and has never failed again in my testing.

I would love to upload this patch so you can help review it and see whether it 
makes sense, but I don't know why JIRA doesn't allow me to (I have fixed 
several HADOOP issues before).


was (Author: cheersyang):
This issue happens to me as well when I am working on some other issues. It 
fails quite often because the test case uses very short sleeps. To get rid of 
this issue, I have a patch that removes the sleep-then-check logic in favor of 
a wait-on-value check: it checks whether the counter reaches the expected value 
within a given timeout, polling at a small interval to keep the wait short. It 
runs faster than before and has never failed again in my testing.

I would love to upload this patch so you can help review it and see whether it 
makes sense, but I don't know why JIRA doesn't allow me to (I have fixed 
several HADOOP issues before).

> o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky
> --
>
> Key: HADOOP-13375
> URL: https://issues.apache.org/jira/browse/HADOOP-13375
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>
> h5. Error Message
> bq. expected:<1> but was:<0>
> h5. Stacktrace
> {quote}
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.security.TestGroupsCaching.testBackgroundRefreshCounters(TestGroupsCaching.java:638)
> {quote}






[jira] [Commented] (HADOOP-13375) o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky

2016-08-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432026#comment-15432026
 ] 

Weiwei Yang commented on HADOOP-13375:
--

This issue happens to me as well when I am working on some other issues. It 
fails quite often because the test case uses very short sleeps. To get rid of 
this issue, I have a patch that removes the sleep-then-check logic in favor of 
a wait-on-value check: it checks whether the counter reaches the expected value 
within a given timeout, polling at a small interval to keep the wait short. It 
runs faster than before and has never failed again in my testing.

I would love to upload this patch so you can help review it and see whether it 
makes sense, but I don't know why JIRA doesn't allow me to (I have fixed 
several HADOOP issues before).
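The wait-on-value approach described here can be sketched generically as follows. This is an illustration, not the actual patch; Hadoop's own test utility GenericTestUtils.waitFor serves a similar purpose.

```java
import java.util.concurrent.TimeoutException;
import java.util.function.IntSupplier;

// Generic wait-on-value helper: poll until the counter reaches the
// expected value, instead of sleeping a fixed amount and checking once.
final class WaitOnValue {
  static void await(IntSupplier counter, int expected,
                    long intervalMs, long timeoutMs)
      throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (counter.getAsInt() != expected) {
      if (System.currentTimeMillis() >= deadline) {
        throw new TimeoutException(
            "counter is " + counter.getAsInt() + ", expected " + expected);
      }
      Thread.sleep(intervalMs); // small interval keeps the wait short
    }
  }
}
```

Compared with a fixed Thread.sleep(n) followed by a single assert, this returns as soon as the value arrives and fails only when it never arrives within the timeout.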

> o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky
> --
>
> Key: HADOOP-13375
> URL: https://issues.apache.org/jira/browse/HADOOP-13375
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>
> h5. Error Message
> bq. expected:<1> but was:<0>
> h5. Stacktrace
> {quote}
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.security.TestGroupsCaching.testBackgroundRefreshCounters(TestGroupsCaching.java:638)
> {quote}






[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS for the authentication failure of proxy user

2016-08-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431930#comment-15431930
 ] 

Hudson commented on HADOOP-13526:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10322 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10322/])
HADOOP-13526. Add detailed logging in KMS for the authentication failure (xiao: 
rev 4070caad70db49b50554088d29ac2fbc7ba62a0a)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationFilter.java


> Add detailed logging in KMS for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13526.patch, HADOOP-13526.patch.1, 
> HADOOP-13526.patch.2, HADOOP-13526.patch.3
>
>
> Problem:
> User A was not able to write a file to an HDFS encryption zone. It was 
> resolved by adding user A as a proxy user in kms-site.xml.
> However, the logs showed:
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution:
> The message saying the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter in 
> {{DelegationTokenAuthenticationFilter}} is called, it hits an exception, and 
> there is no log message at that point. This gives the impression of success 
> even though the exception happens in the next class.






[jira] [Updated] (HADOOP-13526) Add detailed logging in KMS for the authentication failure of proxy user

2016-08-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13526:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha1
   2.9.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk and branch-2.

Thank you [~sacharya] for the contribution, and [~anu] for the review!

> Add detailed logging in KMS for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13526.patch, HADOOP-13526.patch.1, 
> HADOOP-13526.patch.2, HADOOP-13526.patch.3
>
>
> Problem:
> User A was not able to write a file to an HDFS encryption zone. It was 
> resolved by adding user A as a proxy user in kms-site.xml.
> However, the logs showed:
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution:
> The message saying the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter in 
> {{DelegationTokenAuthenticationFilter}} is called, it hits an exception, and 
> there is no log message at that point. This gives the impression of success 
> even though the exception happens in the next class.






[jira] [Updated] (HADOOP-13526) Add detailed logging in KMS for the authentication failure of proxy user

2016-08-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13526:
---
Summary: Add detailed logging in KMS for the authentication failure of 
proxy user  (was: Add detailed logging in KMS log for the authentication 
failure of proxy user)

> Add detailed logging in KMS for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Attachments: HADOOP-13526.patch, HADOOP-13526.patch.1, 
> HADOOP-13526.patch.2, HADOOP-13526.patch.3
>
>
> Problem:
> User A was not able to write a file to an HDFS encryption zone. It was 
> resolved by adding user A as a proxy user in kms-site.xml.
> However, the logs showed:
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution:
> The message saying the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter in 
> {{DelegationTokenAuthenticationFilter}} is called, it hits an exception, and 
> there is no log message at that point. This gives the impression of success 
> even though the exception happens in the next class.






[jira] [Updated] (HADOOP-13533) User cannot set empty HADOOP_SSH_OPTS environment variable option

2016-08-22 Thread Albert Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Chu updated HADOOP-13533:

Description: 
In hadoop-functions.sh in the hadoop_basic_init function there is this 
initialization of HADOOP_SSH_OPTS:

{noformat}
HADOOP_SSH_OPTS=${HADOOP_SSH_OPTS:-"-o BatchMode=yes -o 
StrictHostKeyChecking=no -o ConnectTimeout=10s"}
{noformat}

I believe this parameter substitution is a bug.  While most of the environment 
variables set in the function are generally required for functionality 
(HADOOP_LOG_DIR, HADOOP_LOGFILE, etc.) I don't believe HADOOP_SSH_OPTS is one 
of them.  If the user wishes to set HADOOP_SSH_OPTS to an empty string (i.e. 
HADOOP_SSH_OPTS="") they should be able to.  But instead, this is requiring 
HADOOP_SSH_OPTS to always be set to something.

So I think the 

{noformat}
":-"
{noformat}

in the above should be

{noformat}
"-"
{noformat}

Github pull request to be sent shortly.

  was:
In hadoop-functions.sh in the hadoop_basic_init function there is this 
initialization of HADOOP_SSH_OPTS:

{noformat}
HADOOP_SSH_OPTS=${HADOOP_SSH_OPTS:-"-o BatchMode=yes -o 
StrictHostKeyChecking=no -o ConnectTimeout=10s"}
{noformat}

I believe this parameter substitution is a bug.  While most of the environment 
variables set in the function are generally required for functionality 
(HADOOP_LOG_DIR, HADOOP_LOGFILE, etc.) I don't believe HADOOP_SSH_OPTS is one 
of them.  If the user wishes to set HADOOP_SSH_OPTS to an empty string (i.e. 
HADOOP_SSH_OPTS="") they should be able to.  But instead, this is requiring 
HADOOP_SSH_OPTS to always be set to something.

So I think the ":-" in the above should be "-".  Github pull request to be sent 
shortly.


> User cannot set empty HADOOP_SSH_OPTS environment variable option
> -
>
> Key: HADOOP-13533
> URL: https://issues.apache.org/jira/browse/HADOOP-13533
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Priority: Minor
>
> In hadoop-functions.sh in the hadoop_basic_init function there is this 
> initialization of HADOOP_SSH_OPTS:
> {noformat}
> HADOOP_SSH_OPTS=${HADOOP_SSH_OPTS:-"-o BatchMode=yes -o 
> StrictHostKeyChecking=no -o ConnectTimeout=10s"}
> {noformat}
> I believe this parameter substitution is a bug.  While most of the 
> environment variables set in the function are generally required for 
> functionality (HADOOP_LOG_DIR, HADOOP_LOGFILE, etc.) I don't believe 
> HADOOP_SSH_OPTS is one of them.  If the user wishes to set HADOOP_SSH_OPTS to 
> an empty string (i.e. HADOOP_SSH_OPTS="") they should be able to.  But 
> instead, this is requiring HADOOP_SSH_OPTS to always be set to something.
> So I think the 
> {noformat}
> ":-"
> {noformat}
> in the above should be
> {noformat}
> "-"
> {noformat}
> Github pull request to be sent shortly.






[jira] [Commented] (HADOOP-13533) User cannot set empty HADOOP_SSH_OPTS environment variable option

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431874#comment-15431874
 ] 

ASF GitHub Bot commented on HADOOP-13533:
-

GitHub user chu11 opened a pull request:

https://github.com/apache/hadoop/pull/121

HADOOP-13533: Do not require user to set HADOOP_SSH_OPTS to a non-null 
string, allow

setting of an empty string.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chu11/hadoop HADOOP-13533

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/121.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #121


commit 952f515ca5251f064c62187a4268d1359bb271f7
Author: Albert Chu 
Date:   2016-08-23T00:41:41Z

Do not require user to set HADOOP_SSH_OPTS to a non-null string, allow
setting of an empty string.




> User cannot set empty HADOOP_SSH_OPTS environment variable option
> -
>
> Key: HADOOP-13533
> URL: https://issues.apache.org/jira/browse/HADOOP-13533
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Priority: Minor
>
> In hadoop-functions.sh in the hadoop_basic_init function there is this 
> initialization of HADOOP_SSH_OPTS:
> {noformat}
> HADOOP_SSH_OPTS=${HADOOP_SSH_OPTS:-"-o BatchMode=yes -o 
> StrictHostKeyChecking=no -o ConnectTimeout=10s"}
> {noformat}
> I believe this parameter substitution is a bug.  While most of the 
> environment variables set in the function are generally required for 
> functionality (HADOOP_LOG_DIR, HADOOP_LOGFILE, etc.) I don't believe 
> HADOOP_SSH_OPTS is one of them.  If the user wishes to set HADOOP_SSH_OPTS to 
> an empty string (i.e. HADOOP_SSH_OPTS="") they should be able to.  But 
> instead, this is requiring HADOOP_SSH_OPTS to always be set to something.
> So I think the ":-" in the above should be "-".  Github pull request to be 
> sent shortly.






[jira] [Created] (HADOOP-13533) User cannot set empty HADOOP_SSH_OPTS environment variable option

2016-08-22 Thread Albert Chu (JIRA)
Albert Chu created HADOOP-13533:
---

 Summary: User cannot set empty HADOOP_SSH_OPTS environment 
variable option
 Key: HADOOP-13533
 URL: https://issues.apache.org/jira/browse/HADOOP-13533
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0-alpha2
Reporter: Albert Chu
Priority: Minor


In hadoop-functions.sh in the hadoop_basic_init function there is this 
initialization of HADOOP_SSH_OPTS:

{noformat}
HADOOP_SSH_OPTS=${HADOOP_SSH_OPTS:-"-o BatchMode=yes -o 
StrictHostKeyChecking=no -o ConnectTimeout=10s"}
{noformat}

I believe this parameter substitution is a bug.  While most of the environment 
variables set in the function are generally required for functionality 
(HADOOP_LOG_DIR, HADOOP_LOGFILE, etc.) I don't believe HADOOP_SSH_OPTS is one 
of them.  If the user wishes to set HADOOP_SSH_OPTS to an empty string (i.e. 
HADOOP_SSH_OPTS="") they should be able to.  But instead, this is requiring 
HADOOP_SSH_OPTS to always be set to something.

So I think the ":-" in the above should be "-".  Github pull request to be sent 
shortly.
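The distinction between the two expansion forms can be checked directly in any POSIX shell (illustrative only; OPTS stands in for HADOOP_SSH_OPTS):

```shell
#!/bin/sh
# ":-" falls back to the default when the variable is unset OR empty;
# "-"  falls back only when the variable is unset, so an explicitly
# empty OPTS="" is preserved.

unset OPTS
echo "unset, :-  => [${OPTS:-default}]"   # [default]
echo "unset, -   => [${OPTS-default}]"    # [default]

OPTS=""
echo "empty, :-  => [${OPTS:-default}]"   # [default]  (empty value clobbered)
echo "empty, -   => [${OPTS-default}]"    # []         (empty value preserved)
```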






[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431847#comment-15431847
 ] 

Anu Engineer commented on HADOOP-13526:
---

+1, on patch 3. Thanks for the patch [~sacharya]. Thanks for committing this 
[~xiaochen]

> Add detailed logging in KMS log for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Attachments: HADOOP-13526.patch, HADOOP-13526.patch.1, 
> HADOOP-13526.patch.2, HADOOP-13526.patch.3
>
>
> Problem:
> User A was not able to write a file to an HDFS encryption zone. The issue was 
> resolved by adding user A as a proxy user in kms-site.xml.
> However, the logs showed:
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution:
> The message saying the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter in 
> {{DelegationTokenAuthenticationFilter}} is called, it hits an exception and 
> logs nothing, which creates the impression of success while the exception 
> actually happens in the next class.
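The fix under discussion is essentially "log the failure where the exception is 
thrown." A simplified, hypothetical sketch of that idea (not the actual Hadoop 
patch; the class and method names below are invented for illustration):

```java
import java.util.logging.Logger;

public class ProxyAuthDemo {
    private static final Logger LOG = Logger.getLogger("kms.auth");

    // Hypothetical stand-in for the proxy-user check that currently fails
    // silently in the second filter.
    static void checkProxyUser(String realUser, String doAsUser,
                               boolean allowed) {
        if (!allowed) {
            // The suggested improvement: log before propagating, so the
            // earlier "user [...] authenticated" line is not the last trace
            // in the KMS log.
            String msg = "Proxy authentication failed: user [" + realUser
                + "] is not allowed to impersonate [" + doAsUser + "]";
            LOG.warning(msg);
            throw new SecurityException(msg);
        }
    }

    public static void main(String[] args) {
        try {
            checkProxyUser("usera", "userb", false);
        } catch (SecurityException e) {
            System.out.println(e.getMessage());
        }
        // prints:
        // Proxy authentication failed: user [usera] is not allowed to impersonate [userb]
    }
}
```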






[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431820#comment-15431820
 ] 

Xiao Chen commented on HADOOP-13526:


+1 on patch 3, will commit later today if no objections.

> Add detailed logging in KMS log for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Attachments: HADOOP-13526.patch, HADOOP-13526.patch.1, 
> HADOOP-13526.patch.2, HADOOP-13526.patch.3
>
>
> Problem:
> User A was not able to write a file to an HDFS encryption zone. The issue was 
> resolved by adding user A as a proxy user in kms-site.xml.
> However, the logs showed:
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution:
> The message saying the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter in 
> {{DelegationTokenAuthenticationFilter}} is called, it hits an exception and 
> logs nothing, which creates the impression of success while the exception 
> actually happens in the next class.






[jira] [Commented] (HADOOP-12554) Swift client to read credentials from a credential provider

2016-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431808#comment-15431808
 ] 

Hadoop QA commented on HADOOP-12554:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824946/HADOOP-12554.002.patch
 |
| JIRA Issue | HADOOP-12554 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 530bd4220c40 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3ca4d6d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10340/testReport/ |
| modules | C: hadoop-tools/hadoop-openstack U: hadoop-tools/hadoop-openstack |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10340/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Swift client to read credentials from a credential provider
> ---
>
> Key: HADOOP-12554
> URL: https://issues.apache.org/jira/browse/HADOOP-12554
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: ramtin
>Priority: Minor
> Attachments: HADOOP-12554.001.patch, HADOOP-12554.002.patch
>
>
> As HADOOP-12548 is going to do for s3, Swift should be reading credentials, 
> particularly passwords, from a credential provider.

[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431803#comment-15431803
 ] 

Hadoop QA commented on HADOOP-13526:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824940/HADOOP-13526.patch.3 |
| JIRA Issue | HADOOP-13526 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 519b9b3c4258 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3ca4d6d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10339/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10339/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add detailed logging in KMS log for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526

[jira] [Updated] (HADOOP-12554) Swift client to read credentials from a credential provider

2016-08-22 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HADOOP-12554:

Attachment: HADOOP-12554.002.patch

> Swift client to read credentials from a credential provider
> ---
>
> Key: HADOOP-12554
> URL: https://issues.apache.org/jira/browse/HADOOP-12554
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: ramtin
>Priority: Minor
> Attachments: HADOOP-12554.001.patch, HADOOP-12554.002.patch
>
>
> As HADOOP-12548 is going to do for s3, Swift should be reading credentials, 
> particularly passwords, from a credential provider. 






[jira] [Updated] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Suraj Acharya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suraj Acharya updated HADOOP-13526:
---
Attachment: HADOOP-13526.patch.3

Fixed the checkstyle warning about '(' placement.

> Add detailed logging in KMS log for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Attachments: HADOOP-13526.patch, HADOOP-13526.patch.1, 
> HADOOP-13526.patch.2, HADOOP-13526.patch.3
>
>
> Problem:
> User A was not able to write a file to an HDFS encryption zone. The issue was 
> resolved by adding user A as a proxy user in kms-site.xml.
> However, the logs showed:
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution:
> The message saying the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter in 
> {{DelegationTokenAuthenticationFilter}} is called, it hits an exception and 
> logs nothing, which creates the impression of success while the exception 
> actually happens in the next class.






[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431736#comment-15431736
 ] 

Hadoop QA commented on HADOOP-13526:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824931/HADOOP-13526.patch.2 |
| JIRA Issue | HADOOP-13526 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 31adeffe46b0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3ca4d6d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10338/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10338/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10338/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431706#comment-15431706
 ] 

Hadoop QA commented on HADOOP-13526:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
46s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824928/HADOOP-13526.patch.1 |
| JIRA Issue | HADOOP-13526 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 482a9efd1fce 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 22fc46d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10337/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10337/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10337/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Suraj Acharya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suraj Acharya updated HADOOP-13526:
---
Attachment: HADOOP-13526.patch.2

> Add detailed logging in KMS log for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Attachments: HADOOP-13526.patch, HADOOP-13526.patch.1, 
> HADOOP-13526.patch.2
>
>
> Problem:
> User A was not able to write a file to an HDFS encryption zone. The issue was 
> resolved by adding user A as a proxy user in kms-site.xml.
> However, the logs showed:
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution:
> The message saying the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter in 
> {{DelegationTokenAuthenticationFilter}} is called, it hits an exception and 
> logs nothing, which creates the impression of success while the exception 
> actually happens in the next class.






[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Suraj Acharya (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431683#comment-15431683
 ] 

Suraj Acharya commented on HADOOP-13526:


[~anu] and [~xiaochen]
Done :)
Sorry about it.


> Add detailed logging in KMS log for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Attachments: HADOOP-13526.patch, HADOOP-13526.patch.1, 
> HADOOP-13526.patch.2
>
>
> Problem:
> User A was not able to write a file to an HDFS encryption zone. The issue was 
> resolved by adding user A as a proxy user in kms-site.xml.
> However, the logs showed:
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution:
> The message saying the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter in 
> {{DelegationTokenAuthenticationFilter}} is called, it hits an exception and 
> logs nothing, which creates the impression of success while the exception 
> actually happens in the next class.






[jira] [Updated] (HADOOP-13487) Hadoop KMS should load old delegation tokens from Zookeeper on startup

2016-08-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13487:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk, branch-2 and branch-2.8.

Thanks [~axenol] for reporting this issue, and [~eddyxu] for reviewing!

> Hadoop KMS should load old delegation tokens from Zookeeper on startup
> --
>
> Key: HADOOP-13487
> URL: https://issues.apache.org/jira/browse/HADOOP-13487
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Alex Ivanov
>Assignee: Xiao Chen
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13487.01.patch, HADOOP-13487.02.patch, 
> HADOOP-13487.03.patch, HADOOP-13487.04.patch, HADOOP-13487.05.patch
>
>
> Configuration:
> CDH 5.5.1 (Hadoop 2.6+)
> KMS configured to store delegation tokens in Zookeeper
> DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties
> Findings:
> It seems to me delegation tokens never get cleaned up from Zookeeper past 
> their renewal date. I can see in the logs that the removal thread is started 
> with the expected interval:
> {code}
> 2016-08-11 08:15:24,511 INFO  AbstractDelegationTokenSecretManager - Starting 
> expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
> {code}
> However, I don't see any delegation token removals, which would be indicated 
> by the following debug message from 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager, 
> removeStoredToken(TokenIdent ident), line 769 [CDH]:
> {code}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Removing ZKDTSMDelegationToken_"
>   + ident.getSequenceNumber());
> }
> {code}
> Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't 
> get cleaned up.
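To confirm whether the remover thread ever reaches {{removeStoredToken}}, the 
debug message quoted above can be surfaced by enabling DEBUG for that class in 
kms-log4j.properties. A minimal sketch, assuming standard log4j 1.x properties 
syntax (the logger name is taken from the class cited above):

```
# Surface removeStoredToken()'s "Removing ZKDTSMDelegationToken_..." message
log4j.logger.org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager=DEBUG
```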






[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431667#comment-15431667
 ] 

Xiao Chen commented on HADOOP-13526:


Hi Suraj,

Sorry for not being clear. The checkstyle output also flags the line length; 
could you fix that so we have a green pre-commit?
Thank you.

> Add detailed logging in KMS log for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Attachments: HADOOP-13526.patch, HADOOP-13526.patch.1
>
>
> Problem :
> User A was not able to write a file to HDFS Encryption Zone. It was resolved 
> by adding proxy user A in kms-site.xml
> However, the logs showed :
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution :
> So the message which says the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter on 
> {{DelegationTokenAuthenticationFilter}} is called it hits an exception there 
> and there is no log message there. This leads to the confusion that we have 
> had a success while the exception happens in the next class.






[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431668#comment-15431668
 ] 

Anu Engineer commented on HADOOP-13526:
---

There is one more checkstyle warning about line length being more than 80. 
Could you please fix that too? +1 after that.









[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Suraj Acharya (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431664#comment-15431664
 ] 

Suraj Acharya commented on HADOOP-13526:


I fixed it by declaring it final, as [~xiaochen] mentioned.







[jira] [Commented] (HADOOP-13487) Hadoop KMS should load old delegation tokens from Zookeeper on startup

2016-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431663#comment-15431663
 ] 

Xiao Chen commented on HADOOP-13487:


The changes since Eddy's +1 are trivial and test-only, so I'm committing this 
shortly.









[jira] [Updated] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Suraj Acharya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suraj Acharya updated HADOOP-13526:
---
Attachment: HADOOP-13526.patch.1







[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431638#comment-15431638
 ] 

Xiao Chen commented on HADOOP-13526:


Thanks [~sacharya] for reporting and fixing the issue, and [~anu] for 
reviewing. I agree the current patch is more consistent with the existing code, 
and no test is needed.

Could you fix the checkstyle warning? For the naming, you can declare {{LOG}} 
as final to make checkstyle happy. (Credit to [~andrew.wang] for the tip!) 
+1 pending that.
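The tip refers to Checkstyle's constant-naming rule: a {{static}} field written in UPPER_CASE (like {{LOG}}) is treated as a constant and must also be {{final}}. A minimal sketch of the idea — the class name is made up and {{java.util.logging}} stands in for whatever logging framework the actual patch uses:

```java
import java.util.logging.Logger;

// Illustrative only: "ConstantNameSketch" is a hypothetical class, not
// the real KMS filter being patched.
public class ConstantNameSketch {
    // Checkstyle's ConstantName check expects static fields named in
    // UPPER_SNAKE_CASE to be final. Adding final keeps the conventional
    // LOG name while silencing the warning.
    private static final Logger LOG =
        Logger.getLogger(ConstantNameSketch.class.getName());

    static String loggerName() {
        return LOG.getName();
    }

    public static void main(String[] args) {
        System.out.println(loggerName()); // prints "ConstantNameSketch"
    }
}
```

The alternative — renaming the field to {{log}} — would also satisfy the check, but keeping {{LOG}} matches the surrounding code.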







[jira] [Commented] (HADOOP-13487) Hadoop KMS should load old delegation tokens from Zookeeper on startup

2016-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431629#comment-15431629
 ] 

Hadoop QA commented on HADOOP-13487:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824920/HADOOP-13487.05.patch 
|
| JIRA Issue | HADOOP-13487 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8528519634cc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dc7a1c5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10336/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10336/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HADOOP-11890) Uber-JIRA: Hadoop should support IPv6

2016-08-22 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431628#comment-15431628
 ] 

Elliott Clark commented on HADOOP-11890:


Let me get a patch up with those comments. Thanks.

bq.Where is HADOOP_ALLOW_IPV6 documented?
Where do you think would be the best place to document it?

> Uber-JIRA: Hadoop should support IPv6
> -
>
> Key: HADOOP-11890
> URL: https://issues.apache.org/jira/browse/HADOOP-11890
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nate Edel
>Assignee: Nate Edel
>  Labels: ipv6
>
> Hadoop currently treats IPv6 as unsupported.  Track related smaller issues to 
> support IPv6.
> (Current case here is mainly HBase on HDFS, so any suggestions about other 
> test cases/workload are really appreciated.)






[jira] [Commented] (HADOOP-10776) Open up already widely-used APIs for delegation-token fetching & renewal to ecosystem projects

2016-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431591#comment-15431591
 ] 

Hadoop QA commented on HADOOP-10776:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 285 unchanged - 3 fixed = 287 total (was 288) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-common-project_hadoop-common generated 5 new + 
0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 42s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824914/HADOOP-10776-20160822.txt
 |
| JIRA Issue | HADOOP-10776 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d66198e913f4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dc7a1c5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10335/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10335/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/Pr

[jira] [Commented] (HADOOP-10776) Open up already widely-used APIs for delegation-token fetching & renewal to ecosystem projects

2016-08-22 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431569#comment-15431569
 ] 

Josh Elser commented on HADOOP-10776:
-

Oops! Sorry for the chatter. I should have gone to the correct version instead 
of letting my editor just show me a recent one.

> Open up already widely-used APIs for delegation-token fetching & renewal to 
> ecosystem projects
> --
>
> Key: HADOOP-10776
> URL: https://issues.apache.org/jira/browse/HADOOP-10776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Attachments: HADOOP-10776-20160822.txt
>
>
> Storm would like to be able to fetch delegation tokens and forward them on to 
> running topologies so that they can access HDFS (STORM-346).  But to do so we 
> need to open up access to some of APIs. 
> Most notably FileSystem.addDelegationTokens(), Token.renew, 
> Credentials.getAllTokens, and UserGroupInformation but there may be others.
> At a minimum adding in storm to the list of allowed API users. But ideally 
> making them public. Restricting access to such important functionality to 
> just MR really makes secure HDFS inaccessible to anything except MR, or tools 
> that reuse MR input formats.






[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431560#comment-15431560
 ] 

Anu Engineer commented on HADOOP-13526:
---

[~sacharya] Thanks for the explanation. That makes sense. Let us leave the code 
as you proposed. Would you please fix these two checkstyle warnings? Once they 
are fixed, I think we can commit this. Here is the checkstyle warnings URL: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10328/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt










[jira] [Comment Edited] (HADOOP-13487) Hadoop KMS should load old delegation tokens from Zookeeper on startup

2016-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431551#comment-15431551
 ] 

Xiao Chen edited comment on HADOOP-13487 at 8/22/16 8:31 PM:
-

I enhanced the test to be more robust regarding token cancellation in patch 5. 
I ran it 100 times locally and all runs passed.


was (Author: xiaochen):
I enhanced the test to be more robust regarding token cancellation in patch 5.







[jira] [Comment Edited] (HADOOP-13487) Hadoop KMS should load old delegation tokens from Zookeeper on startup

2016-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431551#comment-15431551
 ] 

Xiao Chen edited comment on HADOOP-13487 at 8/22/16 8:28 PM:
-

I enhanced the test to be more robust regarding token cancellation in patch 5.


was (Author: xiaochen):
I enhanced the test to be more robust regarding token cancellation.







[jira] [Updated] (HADOOP-13487) Hadoop KMS should load old delegation tokens from Zookeeper on startup

2016-08-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13487:
---
Attachment: HADOOP-13487.05.patch

I enhanced the test to be more robust regarding token cancellation.







[jira] [Commented] (HADOOP-13487) Hadoop KMS should load old delegation tokens from Zookeeper on startup

2016-08-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431545#comment-15431545
 ] 

Xiao Chen commented on HADOOP-13487:


Hi [~axenol],
Yes, that workflow works because, after a restart, although the secret manager 
doesn't have the token in its cache ({{currentTokens}}), it falls back to 
reading it from ZK 
([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java#L616]).

The problem is that the token removal thread only checks the in-memory cache. 
So if there's an old token in ZK that nobody is using, it is never loaded into 
{{currentTokens}} for the removal thread to process. 

Also, since we're already loading {{PathChildrenCache}} for tokens and master 
keys at startup, I think syncing the in-memory cache is the right thing to do.
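The mechanism described above can be sketched in plain Java (a simplified model with hypothetical names, not the real Hadoop classes — the point is only that an expiry scan over the in-memory map never sees entries that exist only in the backing store):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Simplified model of the bug: the expiry scan walks only the in-memory
 * cache, so a token that exists only in the backing store (ZooKeeper)
 * is never removed.
 */
public class RemovalScanSketch {

    /** Count tokens the remover would delete: it sees only the cache. */
    public static int expiredInCache(Map<Integer, Long> cache, long now) {
        int removed = 0;
        for (Map.Entry<Integer, Long> e : cache.entrySet()) {
            if (e.getValue() < now) {
                removed++;
            }
        }
        return removed;
    }

    public static void main(String[] args) {
        Map<Integer, Long> zkStore = new HashMap<>();   // stands in for ZK
        zkStore.put(1, 0L);                             // long-expired token
        Map<Integer, Long> currentTokens = new HashMap<>(); // empty after restart

        // The remover never consults zkStore, so the stale token survives.
        System.out.println("removed=" + expiredInCache(currentTokens, 100L)
            + " leftInZk=" + zkStore.size()); // prints removed=0 leftInZk=1
    }
}
```

Syncing the cache from the {{PathChildrenCache}} at startup makes the scan see those tokens again.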

> Hadoop KMS should load old delegation tokens from Zookeeper on startup
> --
>
> Key: HADOOP-13487
> URL: https://issues.apache.org/jira/browse/HADOOP-13487
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Alex Ivanov
>Assignee: Xiao Chen
> Attachments: HADOOP-13487.01.patch, HADOOP-13487.02.patch, 
> HADOOP-13487.03.patch, HADOOP-13487.04.patch
>
>
> Configuration:
> CDH 5.5.1 (Hadoop 2.6+)
> KMS configured to store delegation tokens in Zookeeper
> DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties
> Findings:
> It seems to me delegation tokens never get cleaned up from Zookeeper past 
> their renewal date. I can see in the logs that the removal thread is started 
> with the expected interval:
> {code}
> 2016-08-11 08:15:24,511 INFO  AbstractDelegationTokenSecretManager - Starting 
> expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
> {code}
> However, I don't see any delegation token removals, indicated by the 
> following log message:
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager 
> --> removeStoredToken(TokenIdent ident), line 769 [CDH]
> {code}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Removing ZKDTSMDelegationToken_"
>   + ident.getSequenceNumber());
> }
> {code}
> Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't 
> get cleaned up.






[jira] [Commented] (HADOOP-10776) Open up already widely-used APIs for delegation-token fetching & renewal to ecosystem projects

2016-08-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431544#comment-15431544
 ] 

Chris Nauroth commented on HADOOP-10776:


Hello [~elserj].  {{SecretManager}} is {{Public}} and {{Evolving}} in 
branch-2.8, so we won't need a change there.  (That's an easy thing to miss in 
diff-based reviews like this.)  Thank you for your code review.

> Open up already widely-used APIs for delegation-token fetching & renewal to 
> ecosystem projects
> --
>
> Key: HADOOP-10776
> URL: https://issues.apache.org/jira/browse/HADOOP-10776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Attachments: HADOOP-10776-20160822.txt
>
>
> Storm would like to be able to fetch delegation tokens and forward them on to 
> running topologies so that they can access HDFS (STORM-346).  But to do so we 
> need to open up access to some of APIs. 
> Most notably FileSystem.addDelegationTokens(), Token.renew, 
> Credentials.getAllTokens, and UserGroupInformation but there may be others.
> At a minimum adding in storm to the list of allowed API users. But ideally 
> making them public. Restricting access to such important functionality to 
> just MR really makes secure HDFS inaccessible to anything except MR, or tools 
> that reuse MR input formats.






[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Suraj Acharya (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431543#comment-15431543
 ] 

Suraj Acharya commented on HADOOP-13526:


[~anu], I tried to emulate the model present in {{AuthenticationFilter.java}}.
[Here|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L567-L571]
 we can see that the exception is logged at DEBUG when debug is enabled, and 
at WARN otherwise. I was not sure, but I kept that pattern for consistency, so 
that users are not confused by differing behaviour. 
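The logging pattern being referred to can be sketched as follows (a simplified stand-in with hypothetical names; the real filter uses a logging framework rather than returning strings): full detail at DEBUG when enabled, a one-line WARN otherwise.

```java
public class AuthFailureLogSketch {

    /** Returns the line the filter would emit for an auth failure. */
    public static String format(boolean debugEnabled, Exception ex) {
        if (debugEnabled) {
            // DEBUG: full message (the real code also includes the stack trace).
            return "DEBUG Authentication exception: " + ex.getMessage();
        }
        // Otherwise a single WARN line, keeping production logs compact.
        return "WARN Authentication exception: " + ex.getMessage();
    }

    public static void main(String[] args) {
        Exception ex = new Exception("Invalid proxy user");
        System.out.println(format(false, ex));
        // prints: WARN Authentication exception: Invalid proxy user
    }
}
```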

> Add detailed logging in KMS log for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Attachments: HADOOP-13526.patch
>
>
> Problem :
> User A was not able to write a file to HDFS Encryption Zone. It was resolved 
> by adding proxy user A in kms-site.xml
> However, the logs showed :
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution:
> The message saying the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter in 
> {{DelegationTokenAuthenticationFilter}} is called, it hits an exception 
> with no log message at that point. This gives the misleading impression of 
> success, while the exception actually happens in the next class.






[jira] [Commented] (HADOOP-10776) Open up already widely-used APIs for delegation-token fetching & renewal to ecosystem projects

2016-08-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431540#comment-15431540
 ] 

Chris Nauroth commented on HADOOP-10776:


[~vinodkv], thank you for picking this up.  The patch looks good to me.  My 
only request is that you add a change in {{SecurityUtil}} to mark that one 
{{Public}} too.  That class already gets used a lot in other projects.  I'll be 
+1 after that change.

I think we'll want to review usage and annotations on the web auth stuff too, 
but this much is plenty to get in for a 2.8.0 release.

> Open up already widely-used APIs for delegation-token fetching & renewal to 
> ecosystem projects
> --
>
> Key: HADOOP-10776
> URL: https://issues.apache.org/jira/browse/HADOOP-10776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Attachments: HADOOP-10776-20160822.txt
>
>
> Storm would like to be able to fetch delegation tokens and forward them on to 
> running topologies so that they can access HDFS (STORM-346).  But to do so we 
> need to open up access to some of APIs. 
> Most notably FileSystem.addDelegationTokens(), Token.renew, 
> Credentials.getAllTokens, and UserGroupInformation but there may be others.
> At a minimum adding in storm to the list of allowed API users. But ideally 
> making them public. Restricting access to such important functionality to 
> just MR really makes secure HDFS inaccessible to anything except MR, or tools 
> that reuse MR input formats.






[jira] [Commented] (HADOOP-10776) Open up already widely-used APIs for delegation-token fetching & renewal to ecosystem projects

2016-08-22 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431538#comment-15431538
 ] 

Josh Elser commented on HADOOP-10776:
-

[~vinodkv], was avoiding o.a.h.s.token.SecretManager (and only opening up 
AbstractDelegationTokenSecretManager) intentional? A quick grep on one 
downstream project where I wired up delegation support shows that I used 
SecretManager directly (which probably means I copied it from another 
project).

Also, even if AbstractDelegationTokenSecretManager is Public, there are still 
some abstract methods on SecretManager (which remains LimitedPrivate) that I'd 
need to implement when extending AbstractDelegationTokenSecretManager.
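The annotation gap described here can be illustrated with a plain-Java analog (hypothetical names — not the real Hadoop classes): even if the intermediate abstract class is opened up, a downstream subclass is still forced to implement abstract methods declared on the still-restricted parent.

```java
/** Stands in for SecretManager (still LimitedPrivate). */
abstract class ParentManager {
    public abstract byte[] createPassword(String identifier);
}

/** Stands in for AbstractDelegationTokenSecretManager (now Public). */
abstract class MiddleManager extends ParentManager {
    public String describe() {
        return "delegation token secret manager";
    }
}

/** A downstream project must still override the parent's abstract method,
 *  so the parent's audience annotation matters to downstream code too. */
public class DownstreamManager extends MiddleManager {
    @Override
    public byte[] createPassword(String identifier) {
        return identifier.getBytes();
    }
}
```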

> Open up already widely-used APIs for delegation-token fetching & renewal to 
> ecosystem projects
> --
>
> Key: HADOOP-10776
> URL: https://issues.apache.org/jira/browse/HADOOP-10776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Attachments: HADOOP-10776-20160822.txt
>
>
> Storm would like to be able to fetch delegation tokens and forward them on to 
> running topologies so that they can access HDFS (STORM-346).  But to do so we 
> need to open up access to some of APIs. 
> Most notably FileSystem.addDelegationTokens(), Token.renew, 
> Credentials.getAllTokens, and UserGroupInformation but there may be others.
> At a minimum adding in storm to the list of allowed API users. But ideally 
> making them public. Restricting access to such important functionality to 
> just MR really makes secure HDFS inaccessible to anything except MR, or tools 
> that reuse MR input formats.






[jira] [Commented] (HADOOP-13487) Hadoop KMS should load old delegation tokens from Zookeeper on startup

2016-08-22 Thread Alex Ivanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431535#comment-15431535
 ] 

Alex Ivanov commented on HADOOP-13487:
--

Thank you for submitting the patch, [~xiaochen]. Can you please clarify why 
this loading of delegation tokens/keys is necessary? In my experience, the 
following workflow works, which gave me the impression that tokens from ZK are 
loaded into the cache upon KMS start-up:
1. Create a KMS delegation token
2. Restart KMS
3. Authenticating with the same delegation token still works


> Hadoop KMS should load old delegation tokens from Zookeeper on startup
> --
>
> Key: HADOOP-13487
> URL: https://issues.apache.org/jira/browse/HADOOP-13487
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Alex Ivanov
>Assignee: Xiao Chen
> Attachments: HADOOP-13487.01.patch, HADOOP-13487.02.patch, 
> HADOOP-13487.03.patch, HADOOP-13487.04.patch
>
>
> Configuration:
> CDH 5.5.1 (Hadoop 2.6+)
> KMS configured to store delegation tokens in Zookeeper
> DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties
> Findings:
> It seems to me delegation tokens never get cleaned up from Zookeeper past 
> their renewal date. I can see in the logs that the removal thread is started 
> with the expected interval:
> {code}
> 2016-08-11 08:15:24,511 INFO  AbstractDelegationTokenSecretManager - Starting 
> expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
> {code}
> However, I don't see any delegation token removals, indicated by the 
> following log message:
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager 
> --> removeStoredToken(TokenIdent ident), line 769 [CDH]
> {code}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Removing ZKDTSMDelegationToken_"
>   + ident.getSequenceNumber());
> }
> {code}
> Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't 
> get cleaned up.






[jira] [Commented] (HADOOP-10776) Open up already widely-used APIs for delegation-token fetching & renewal to ecosystem projects

2016-08-22 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431512#comment-15431512
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-10776:
--

And /cc'ing [~steve_l] too.

> Open up already widely-used APIs for delegation-token fetching & renewal to 
> ecosystem projects
> --
>
> Key: HADOOP-10776
> URL: https://issues.apache.org/jira/browse/HADOOP-10776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Attachments: HADOOP-10776-20160822.txt
>
>
> Storm would like to be able to fetch delegation tokens and forward them on to 
> running topologies so that they can access HDFS (STORM-346).  But to do so we 
> need to open up access to some of APIs. 
> Most notably FileSystem.addDelegationTokens(), Token.renew, 
> Credentials.getAllTokens, and UserGroupInformation but there may be others.
> At a minimum adding in storm to the list of allowed API users. But ideally 
> making them public. Restricting access to such important functionality to 
> just MR really makes secure HDFS inaccessible to anything except MR, or tools 
> that reuse MR input formats.






[jira] [Updated] (HADOOP-10776) Open up already widely-used APIs for delegation-token fetching & renewal to ecosystem projects

2016-08-22 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-10776:
-
Summary: Open up already widely-used APIs for delegation-token fetching & 
renewal to ecosystem projects  (was: Open up Delegation token fetching and 
renewal to STORM (Possibly others))

> Open up already widely-used APIs for delegation-token fetching & renewal to 
> ecosystem projects
> --
>
> Key: HADOOP-10776
> URL: https://issues.apache.org/jira/browse/HADOOP-10776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Attachments: HADOOP-10776-20160822.txt
>
>
> Storm would like to be able to fetch delegation tokens and forward them on to 
> running topologies so that they can access HDFS (STORM-346).  But to do so we 
> need to open up access to some of APIs. 
> Most notably FileSystem.addDelegationTokens(), Token.renew, 
> Credentials.getAllTokens, and UserGroupInformation but there may be others.
> At a minimum adding in storm to the list of allowed API users. But ideally 
> making them public. Restricting access to such important functionality to 
> just MR really makes secure HDFS inaccessible to anything except MR, or tools 
> that reuse MR input formats.






[jira] [Updated] (HADOOP-10776) Open up Delegation token fetching and renewal to STORM (Possibly others)

2016-08-22 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-10776:
-
Assignee: Vinod Kumar Vavilapalli
  Status: Patch Available  (was: Open)

> Open up Delegation token fetching and renewal to STORM (Possibly others)
> 
>
> Key: HADOOP-10776
> URL: https://issues.apache.org/jira/browse/HADOOP-10776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
>     Attachments: HADOOP-10776-20160822.txt
>
>
> Storm would like to be able to fetch delegation tokens and forward them on to 
> running topologies so that they can access HDFS (STORM-346).  But to do so we 
> need to open up access to some of APIs. 
> Most notably FileSystem.addDelegationTokens(), Token.renew, 
> Credentials.getAllTokens, and UserGroupInformation but there may be others.
> At a minimum adding in storm to the list of allowed API users. But ideally 
> making them public. Restricting access to such important functionality to 
> just MR really makes secure HDFS inaccessible to anything except MR, or tools 
> that reuse MR input formats.






[jira] [Updated] (HADOOP-10776) Open up Delegation token fetching and renewal to STORM (Possibly others)

2016-08-22 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-10776:
-
Attachment: HADOOP-10776-20160822.txt

Taking a quick crack at making some of the already very widely used 
security-related classes public.

The patch makes the following public
 - Classes: AccessControlException, Credentials, UserGroupInformation, 
AuthorizationException, Token.TrivialRenewer, 
AbstractDelegationTokenIdentifier, AbstractDelegationTokenSecretManager
 - Methods: FileSystem.getCanonicalServiceName(), 
FileSystem.addDelegationTokens()

A couple of general notes:
 - I'd like to skip the evolving vs public discussion for now and focus only on 
visibility - so I just marked everything evolving.
 - I did a quick search and obviously there are a lot more classes that need 
more careful thinking. Unless I've missed some of the very obvious ones, I'd 
like to make progress on getting the current ones done first.

[~revans2], [~cnauroth], [~arpitagarwal], can one or more of you quickly look 
at this? Shouldn't take more than 5-10 minutes.
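The marking the patch applies is of roughly this shape (a self-contained sketch: the real annotations are {{@InterfaceAudience.Public}} and {{@InterfaceStability.Evolving}} from org.apache.hadoop.classification, reproduced here in miniature so the example compiles on its own):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Miniature stand-ins for Hadoop's audience/stability annotations.
@Retention(RetentionPolicy.RUNTIME)
@interface Public {}

@Retention(RetentionPolicy.RUNTIME)
@interface Evolving {}

/** Stand-in for a class the patch opens up (e.g. Credentials). */
@Public
@Evolving
class OpenedUpClass {}

public class AudienceCheck {
    /** True when a class carries both marker annotations. */
    public static boolean isPublicApi(Class<?> c) {
        return c.isAnnotationPresent(Public.class)
            && c.isAnnotationPresent(Evolving.class);
    }

    public static void main(String[] args) {
        System.out.println(isPublicApi(OpenedUpClass.class)); // prints true
    }
}
```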

> Open up Delegation token fetching and renewal to STORM (Possibly others)
> 
>
> Key: HADOOP-10776
> URL: https://issues.apache.org/jira/browse/HADOOP-10776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Priority: Blocker
> Attachments: HADOOP-10776-20160822.txt
>
>
> Storm would like to be able to fetch delegation tokens and forward them on to 
> running topologies so that they can access HDFS (STORM-346).  But to do so we 
> need to open up access to some of APIs. 
> Most notably FileSystem.addDelegationTokens(), Token.renew, 
> Credentials.getAllTokens, and UserGroupInformation but there may be others.
> At a minimum adding in storm to the list of allowed API users. But ideally 
> making them public. Restricting access to such important functionality to 
> just MR really makes secure HDFS inaccessible to anything except MR, or tools 
> that reuse MR input formats.






[jira] [Updated] (HADOOP-13447) Refactor S3AFileSystem to support introduction of separate metadata repository and tests.

2016-08-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13447:
---
Attachment: HADOOP-13447.005.patch

I'm attaching patch 005.  This has some small changes in 
{{TestS3AGetFileStatus}} for compatibility with Java 7 so that we can commit 
the same patch to branch-2.  The {{Collections#emptyList}} calls need me to 
pass an explicit type argument when compiling for Java 7.

I have clicked Cancel Patch for now, because we need to commit HADOOP-13446 
first before getting a final pre-commit run on this one.
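The Java 7 issue mentioned above is the lack of target typing for generic method calls in argument position: {{Collections.emptyList()}} infers {{List<Object>}} there, so an explicit type argument is needed. A minimal illustration (method name hypothetical):

```java
import java.util.Collections;
import java.util.List;

public class EmptyListWitness {

    public static int countNames(List<String> names) {
        return names.size();
    }

    public static void main(String[] args) {
        // Java 7: countNames(Collections.emptyList()) fails to compile,
        // because the call infers List<Object>. The explicit type
        // argument ("type witness") fixes it and is also valid on Java 8+.
        int n = countNames(Collections.<String>emptyList());
        System.out.println(n); // prints 0
    }
}
```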

> Refactor S3AFileSystem to support introduction of separate metadata 
> repository and tests.
> -
>
> Key: HADOOP-13447
> URL: https://issues.apache.org/jira/browse/HADOOP-13447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13447-HADOOP-13446.001.patch, 
> HADOOP-13447-HADOOP-13446.002.patch, HADOOP-13447.003.patch, 
> HADOOP-13447.004.patch, HADOOP-13447.005.patch
>
>
> The scope of this issue is to refactor the existing {{S3AFileSystem}} into 
> multiple coordinating classes.  The goal of this refactoring is to separate 
> the {{FileSystem}} API binding from the AWS SDK integration, make code 
> maintenance easier while we're making changes for S3Guard, and make it easier 
> to mock some implementation details so that tests can simulate eventual 
> consistency behavior in a deterministic way.






[jira] [Updated] (HADOOP-13447) Refactor S3AFileSystem to support introduction of separate metadata repository and tests.

2016-08-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13447:
---
Status: Open  (was: Patch Available)

> Refactor S3AFileSystem to support introduction of separate metadata 
> repository and tests.
> -
>
> Key: HADOOP-13447
> URL: https://issues.apache.org/jira/browse/HADOOP-13447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13447-HADOOP-13446.001.patch, 
> HADOOP-13447-HADOOP-13446.002.patch, HADOOP-13447.003.patch, 
> HADOOP-13447.004.patch
>
>
> The scope of this issue is to refactor the existing {{S3AFileSystem}} into 
> multiple coordinating classes.  The goal of this refactoring is to separate 
> the {{FileSystem}} API binding from the AWS SDK integration, make code 
> maintenance easier while we're making changes for S3Guard, and make it easier 
> to mock some implementation details so that tests can simulate eventual 
> consistency behavior in a deterministic way.






[jira] [Commented] (HADOOP-13487) Hadoop KMS should load old delegation tokens from Zookeeper on startup

2016-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431459#comment-15431459
 ] 

Hadoop QA commented on HADOOP-13487:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 41s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824906/HADOOP-13487.04.patch 
|
| JIRA Issue | HADOOP-13487 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8bd26e8d9d45 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dc7a1c5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10334/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10334/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10334/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Hadoop KMS should load old delegation tokens from Zookeeper on startup
> --
>
> Key: HADOOP-13487
> URL: https://issues.apache.org/jira/browse/HADOOP-13487
> Project: Hadoop Common
>  Issue Type: Bug
>  Component

[jira] [Updated] (HADOOP-13487) Hadoop KMS should load old delegation tokens from Zookeeper on startup

2016-08-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13487:
---
Summary: Hadoop KMS should load old delegation tokens from Zookeeper on 
startup  (was: Hadoop KMS doesn't clean up old delegation tokens stored in 
Zookeeper)

> Hadoop KMS should load old delegation tokens from Zookeeper on startup
> --
>
> Key: HADOOP-13487
> URL: https://issues.apache.org/jira/browse/HADOOP-13487
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Alex Ivanov
>Assignee: Xiao Chen
> Attachments: HADOOP-13487.01.patch, HADOOP-13487.02.patch, 
> HADOOP-13487.03.patch, HADOOP-13487.04.patch
>
>
> Configuration:
> CDH 5.5.1 (Hadoop 2.6+)
> KMS configured to store delegation tokens in Zookeeper
> DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties
> Findings:
> It seems to me delegation tokens never get cleaned up from Zookeeper past 
> their renewal date. I can see in the logs that the removal thread is started 
> with the expected interval:
> {code}
> 2016-08-11 08:15:24,511 INFO  AbstractDelegationTokenSecretManager - Starting 
> expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
> {code}
> However, I don't see any delegation token removals, indicated by the 
> following log message:
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager 
> --> removeStoredToken(TokenIdent ident), line 769 [CDH]
> {code}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Removing ZKDTSMDelegationToken_"
>   + ident.getSequenceNumber());
> }
> {code}
> Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't 
> get cleaned up.






[jira] [Updated] (HADOOP-13487) Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper

2016-08-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13487:
---
Attachment: HADOOP-13487.04.patch

Thanks Eddy!
I fixed the checkstyle typo in patch 4, as we discussed offline. Will commit
once Jenkins comes back.

> Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper
> -
>
> Key: HADOOP-13487
> URL: https://issues.apache.org/jira/browse/HADOOP-13487
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Alex Ivanov
>Assignee: Xiao Chen
> Attachments: HADOOP-13487.01.patch, HADOOP-13487.02.patch, 
> HADOOP-13487.03.patch, HADOOP-13487.04.patch
>
>
> Configuration:
> CDH 5.5.1 (Hadoop 2.6+)
> KMS configured to store delegation tokens in Zookeeper
> DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties
> Findings:
> It seems to me delegation tokens never get cleaned up from Zookeeper past 
> their renewal date. I can see in the logs that the removal thread is started 
> with the expected interval:
> {code}
> 2016-08-11 08:15:24,511 INFO  AbstractDelegationTokenSecretManager - Starting 
> expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
> {code}
> However, I don't see any delegation token removals, indicated by the 
> following log message:
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager 
> --> removeStoredToken(TokenIdent ident), line 769 [CDH]
> {code}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Removing ZKDTSMDelegationToken_"
>   + ident.getSequenceNumber());
> }
> {code}
> Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't 
> get cleaned up.






[jira] [Commented] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.

2016-08-22 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431379#comment-15431379
 ] 

Aaron Fabbri commented on HADOOP-13446:
---

Reviewing the trunk 006 patch. Compared to the 002 version I reviewed
previously, and similar to the branch-2 patch:

- ITestS3AContractGetFileStatus: force multiple responses by lowering 
fs.s3a.paging.maximum to 2 (same as in latest branch-2 patch)
- Style fixes, comment improvements.
- Add credential provider test cases for unit and integration tests (invalid 
class)
- Move some deletion tests to be Integration tests
- Enhancements for ITestS3ADirectoryPerf

+1 on this patch as well (though I'm only a committer on the feature branch).

Shout if you want me to run these tests as well.



> Support running isolated unit tests separate from AWS integration tests.
> 
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch, 
> HADOOP-13446-branch-2.006.patch, HADOOP-13446.004.patch, 
> HADOOP-13446.005.patch, HADOOP-13446.006.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.
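
One reason isolated unit tests matter here, as the description notes, is simulating S3's eventual consistency without a live AWS account. A toy fake illustrating the idea (all names are illustrative; this is not code from the hadoop-aws patch):

```java
import java.util.HashMap;
import java.util.Map;

// Toy fake of an eventually consistent object store, the kind of mock-based
// fixture the description anticipates for isolated S3Guard unit tests.
public class EventuallyConsistentFake {
    private final Map<String, String> data = new HashMap<>();
    private final Map<String, Integer> readsUntilVisible = new HashMap<>();

    // A newly written key stays invisible for the next `delayReads` reads,
    // mimicking S3's read-after-write inconsistency.
    public void put(String key, String value, int delayReads) {
        data.put(key, value);
        readsUntilVisible.put(key, delayReads);
    }

    // Returns null while the key is still "propagating".
    public String get(String key) {
        Integer remaining = readsUntilVisible.get(key);
        if (remaining != null && remaining > 0) {
            readsUntilVisible.put(key, remaining - 1);
            return null;
        }
        return data.get(key);
    }

    public static void main(String[] args) {
        EventuallyConsistentFake store = new EventuallyConsistentFake();
        store.put("dir/file", "contents", 2);
        store.get("dir/file"); // null: not yet visible
        store.get("dir/file"); // null: not yet visible
        store.get("dir/file"); // now returns "contents"
    }
}
```

A unit test against a fake like this can exercise retry and metadata-store logic deterministically, which is impossible against the real eventually consistent back-end.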






[jira] [Comment Edited] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.

2016-08-22 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431335#comment-15431335
 ] 

Aaron Fabbri edited comment on HADOOP-13446 at 8/22/16 6:26 PM:


Reviewing latest branch-2 patch.  Compared to the 002 patch I reviewed earlier:

- Moves S3 Contract tests to be integration tests.
- Improves some comments
- Adds some tests for credentials provider (invalid class)
- Fixes some code style issues
- Some enhancements for ITestS3ADirectoryPerf

I'm +1 on the branch-2 patch.  Will review trunk diff next.





was (Author: fabbri):
Reviewing latest branch-2 patch.  Compared to the 002 patch I reviewed earlier:

- Moves S3 Contract tests to be integration tests.
- Improves some comments
- Adds some tests for credentials provider (invalid class)
- Fixes some code style issues
- Some enhancements for ITestS3ADirectoryPerf
- Cleanup FileContext after yarn/TestS3A.java

I'm +1 on the branch-2 patch.  Will review trunk diff next.




> Support running isolated unit tests separate from AWS integration tests.
> 
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch, 
> HADOOP-13446-branch-2.006.patch, HADOOP-13446.004.patch, 
> HADOOP-13446.005.patch, HADOOP-13446.006.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.






[jira] [Commented] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431342#comment-15431342
 ] 

Anu Engineer commented on HADOOP-13526:
---

[~sacharya] Thanks for catching and fixing this. The patch looks good. One
small question: should the log level depend on whether debugging is turned on?
It looks like if DEBUG is enabled, the failure will be logged at DEBUG level
instead of WARN (I do see we log the details of the failure with this debug
statement). Should we log at WARN level even when debug is turned on, but
include the details of the exception?


> Add detailed logging in KMS log for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Attachments: HADOOP-13526.patch
>
>
> Problem :
> User A was not able to write a file to HDFS Encryption Zone. It was resolved 
> by adding proxy user A in kms-site.xml
> However, the logs showed :
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution :
> The message saying the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter in 
> {{DelegationTokenAuthenticationFilter}} is invoked, it hits an exception, and 
> nothing is logged at that point. This creates the impression of a success 
> while the exception actually happens in the next class.
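
The gap described above can be shown with a stripped-down filter chain: the first stage logs success, the second rejects without logging, and the proposed fix is a log line at the rejection point. Class and method names are illustrative, not the actual Hadoop filter classes:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the misleading logging trail described in the issue.
public class FilterLoggingSketch {
    static final List<String> LOG = new ArrayList<>();

    // First stage: authentication succeeds and logs at DEBUG.
    static void authenticationFilter(String user) {
        LOG.add("DEBUG user [" + user + "] authenticated");
    }

    // Second stage: proxy-user authorization. The proposed fix is to log the
    // failure here before propagating it, so the trail does not end on a
    // misleading "authenticated" line.
    static void delegationTokenFilter(String user, boolean proxyAllowed) {
        if (!proxyAllowed) {
            LOG.add("WARN proxy auth failed for [" + user + "]");
            throw new IllegalStateException("proxy user not authorized");
        }
    }

    static List<String> handleRequest(String user, boolean proxyAllowed) {
        LOG.clear();
        authenticationFilter(user);
        try {
            delegationTokenFilter(user, proxyAllowed);
        } catch (IllegalStateException rejected) {
            // request rejected; the WARN line above explains why
        }
        return LOG;
    }

    public static void main(String[] args) {
        for (String line : handleRequest("usera", false)) {
            System.out.println(line);
        }
    }
}
```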






[jira] [Commented] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.

2016-08-22 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431335#comment-15431335
 ] 

Aaron Fabbri commented on HADOOP-13446:
---

Reviewing latest branch-2 patch.  Compared to the 002 patch I reviewed earlier:

- Moves S3 Contract tests to be integration tests.
- Improves some comments
- Adds some tests for credentials provider (invalid class)
- Fixes some code style issues
- Some enhancements for ITestS3ADirectoryPerf
- Cleanup FileContext after yarn/TestS3A.java

I'm +1 on the branch-2 patch.  Will review trunk diff next.




> Support running isolated unit tests separate from AWS integration tests.
> 
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch, 
> HADOOP-13446-branch-2.006.patch, HADOOP-13446.004.patch, 
> HADOOP-13446.005.patch, HADOOP-13446.006.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.






[jira] [Commented] (HADOOP-13487) Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper

2016-08-22 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431323#comment-15431323
 ] 

Lei (Eddy) Xu commented on HADOOP-13487:


+1 thanks, [~xiaochen]

> Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper
> -
>
> Key: HADOOP-13487
> URL: https://issues.apache.org/jira/browse/HADOOP-13487
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Alex Ivanov
>Assignee: Xiao Chen
> Attachments: HADOOP-13487.01.patch, HADOOP-13487.02.patch, 
> HADOOP-13487.03.patch
>
>
> Configuration:
> CDH 5.5.1 (Hadoop 2.6+)
> KMS configured to store delegation tokens in Zookeeper
> DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties
> Findings:
> It seems to me delegation tokens never get cleaned up from Zookeeper past 
> their renewal date. I can see in the logs that the removal thread is started 
> with the expected interval:
> {code}
> 2016-08-11 08:15:24,511 INFO  AbstractDelegationTokenSecretManager - Starting 
> expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
> {code}
> However, I don't see any delegation token removals, indicated by the 
> following log message:
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager 
> --> removeStoredToken(TokenIdent ident), line 769 [CDH]
> {code}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Removing ZKDTSMDelegationToken_"
>   + ident.getSequenceNumber());
> }
> {code}
> Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't 
> get cleaned up.






[jira] [Commented] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.

2016-08-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431295#comment-15431295
 ] 

Chris Nauroth commented on HADOOP-13446:


Confirmed that the separation of unit vs. integration tests is working as 
expected and all tests pass on both trunk and branch-2 against a bucket in 
US-west-2.  I'd like to commit revision 006 if I can get another +1.

> Support running isolated unit tests separate from AWS integration tests.
> 
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch, 
> HADOOP-13446-branch-2.006.patch, HADOOP-13446.004.patch, 
> HADOOP-13446.005.patch, HADOOP-13446.006.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.






[jira] [Updated] (HADOOP-13526) Add detailed logging in KMS log for the authentication failure of proxy user

2016-08-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13526:
---
Assignee: Suraj Acharya

> Add detailed logging in KMS log for the authentication failure of proxy user
> 
>
> Key: HADOOP-13526
> URL: https://issues.apache.org/jira/browse/HADOOP-13526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
> Environment: RHEL
>Reporter: Suraj Acharya
>Assignee: Suraj Acharya
>Priority: Minor
> Attachments: HADOOP-13526.patch
>
>
> Problem :
> User A was not able to write a file to HDFS Encryption Zone. It was resolved 
> by adding proxy user A in kms-site.xml
> However, the logs showed :
> {code}2016-08-10 19:32:08,954 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Request 
> https://vm.example.com:16000/kms/v1/keyversion/aMxsSSKmMEzINTIrKURpFJgHnZxiOvsT9L1nMpbUoGu/_eek?eek_op=decrypt&doAs=userb&user.name=usera
>  user [usera] authenticated{code}
> Possible Solution :
> The message saying the user was successfully authenticated comes from 
> {{AuthenticationFilter.java}}. However, when the filter in 
> {{DelegationTokenAuthenticationFilter}} is invoked, it hits an exception, and 
> nothing is logged at that point. This creates the impression of a success 
> while the exception actually happens in the next class.






[jira] [Commented] (HADOOP-13396) Allow pluggable audit loggers in KMS

2016-08-22 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431291#comment-15431291
 ] 

Andrew Wang commented on HADOOP-13396:
--

Splitting the patch sounds good to me. A few more comments:

* Still some lingering JSON references in the two log4j properties files
* Can we make the default value of the config key the classname, rather than 
empty? This way users have an example.
* Should abort if a specified audit logger cannot be configured. Remember that 
the audit logger is important for security, so we don't want to accidentally 
not log in the case of misconfiguration.
* I really like that comment on SimpleKMSAuditLogger. One grammar nit, "and 
will be haunted by the consumer tools / developers", maybe you meant "will 
haunt consumer tools / developers" ?
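
The fail-fast suggestion above (abort if a specified audit logger cannot be configured) might look roughly like this; the interface and class names are illustrative, not the patch's actual API:

```java
// Sketch of fail-fast reflection loading for a security-critical component.
// AuditLogger, SimpleAuditLogger, and load are hypothetical names.
public class AuditLoggerLoader {
    public interface AuditLogger {
        void log(String event);
    }

    public static class SimpleAuditLogger implements AuditLogger {
        @Override
        public void log(String event) {
            System.out.println("AUDIT " + event);
        }
    }

    public static AuditLogger load(String className) {
        try {
            return (AuditLogger) Class.forName(className)
                    .getDeclaredConstructor()
                    .newInstance();
        } catch (ReflectiveOperationException | ClassCastException e) {
            // Audit logging is security-critical: a misconfigured logger
            // must abort startup, never be silently skipped.
            throw new IllegalStateException(
                    "Cannot configure audit logger: " + className, e);
        }
    }

    public static void main(String[] args) {
        AuditLogger logger = load(SimpleAuditLogger.class.getName());
        logger.log("key access granted");
    }
}
```

Making the default config value the concrete classname, as suggested, also means this path is exercised on every startup rather than only when a user overrides it.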

> Allow pluggable audit loggers in KMS
> 
>
> Key: HADOOP-13396
> URL: https://issues.apache.org/jira/browse/HADOOP-13396
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13396.01.patch, HADOOP-13396.02.patch, 
> HADOOP-13396.03.patch, HADOOP-13396.04.patch, HADOOP-13396.05.patch, 
> HADOOP-13396.06.patch, HADOOP-13396.07.patch, HADOOP-13396.08.patch
>
>
> Currently, KMS audit log is using log4j, to write a text format log.
> We should refactor this, so that people can easily add new format audit logs. 
> The current text format log should be the default, and all of its behavior 
> should remain compatible.






[jira] [Commented] (HADOOP-13396) Allow pluggable audit loggers in KMS

2016-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431242#comment-15431242
 ] 

Hadoop QA commented on HADOOP-13396:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-common-project/hadoop-kms: The patch 
generated 2 new + 21 unchanged - 4 fixed = 23 total (was 25) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
6s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824895/HADOOP-13396.08.patch 
|
| JIRA Issue | HADOOP-13396 |
| Optional Tests |  asflicense  mvnsite  unit  xml  compile  javac  javadoc  
mvninstall  findbugs  checkstyle  |
| uname | Linux d7b18cb94a2c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 115ecb5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10333/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-kms.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10333/testReport/ |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10333/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow pluggable audit loggers in KMS
> 
>
> Key: HADOOP-13396
> URL: https://issues.apache.org/jira/browse/HADOOP-13396
> Proj

[jira] [Commented] (HADOOP-13532) Fix typo in hadoop_connect_to_hosts error message

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431215#comment-15431215
 ] 

ASF GitHub Bot commented on HADOOP-13532:
-

GitHub user chu11 opened a pull request:

https://github.com/apache/hadoop/pull/120

HADOOP-13532. Fix typo in hadoop_connect_to_hosts error message



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chu11/hadoop HADOOP-13532

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/120.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #120


commit f93b011260b32fb87c81e798f77ca440eb996483
Author: Albert Chu 
Date:   2016-08-22T17:13:42Z

Fix typo in hadoop_connect_to_hosts error message




> Fix typo in hadoop_connect_to_hosts error message
> -
>
> Key: HADOOP-13532
> URL: https://issues.apache.org/jira/browse/HADOOP-13532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Priority: Trivial
>
> Recently hit
> {noformat}
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAME were defined. Aborting.
> {noformat}
> Had issues until I realized "HADOOP_WORKER_NAME" is supposed to be 
> "HADOOP_WORKER_NAMES" with an 'S'.
> Github pull request to be sent shortly.






[jira] [Updated] (HADOOP-13532) Fix typo in hadoop_connect_to_hosts error message

2016-08-22 Thread Albert Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Chu updated HADOOP-13532:

Description: 
Recently hit

{noformat}
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAME were defined. Aborting.
{noformat}


Took me a bit to realize "HADOOP_WORKER_NAME" is supposed to be 
"HADOOP_WORKER_NAMES" with an 'S'.

Github pull request to be sent shortly.

  was:
Recently hit

{noformat}
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAME were defined. Aborting.
{noformat}


Had issues until I realized "HADOOP_WORKER_NAME" is supposed to be 
"HADOOP_WORKER_NAMES" with an 'S'.

Github pull request to be sent shortly.


> Fix typo in hadoop_connect_to_hosts error message
> -
>
> Key: HADOOP-13532
> URL: https://issues.apache.org/jira/browse/HADOOP-13532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Priority: Trivial
>
> Recently hit
> {noformat}
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAME were defined. Aborting.
> {noformat}
> Took me a bit to realize "HADOOP_WORKER_NAME" is supposed to be 
> "HADOOP_WORKER_NAMES" with an 'S'.
> Github pull request to be sent shortly.






[jira] [Updated] (HADOOP-13532) Fix typo in hadoop_connect_to_hosts error message

2016-08-22 Thread Albert Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Chu updated HADOOP-13532:

Description: 
Recently hit

{noformat}
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAME were defined. Aborting.
{noformat}


Had issues until I realized "HADOOP_WORKER_NAME" is supposed to be 
"HADOOP_WORKER_NAMES" with an 'S'.

Github pull request to be sent shortly.

  was:
Recently hit

```
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAME were defined. Aborting.
```

Had issues until I realized "HADOOP_WORKER_NAME" is supposed to be 
"HADOOP_WORKER_NAMES" with an 'S'.

Github pull request to be sent shortly.


> Fix typo in hadoop_connect_to_hosts error message
> -
>
> Key: HADOOP-13532
> URL: https://issues.apache.org/jira/browse/HADOOP-13532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Albert Chu
>Priority: Trivial
>
> Recently hit
> {noformat}
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAME were defined. Aborting.
> {noformat}
> Had issues until I realized "HADOOP_WORKER_NAME" is supposed to be 
> "HADOOP_WORKER_NAMES" with an 'S'.
> Github pull request to be sent shortly.






[jira] [Created] (HADOOP-13532) Fix typo in hadoop_connect_to_hosts error message

2016-08-22 Thread Albert Chu (JIRA)
Albert Chu created HADOOP-13532:
---

 Summary: Fix typo in hadoop_connect_to_hosts error message
 Key: HADOOP-13532
 URL: https://issues.apache.org/jira/browse/HADOOP-13532
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0-alpha2
Reporter: Albert Chu
Priority: Trivial


Recently hit

```
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAME were defined. Aborting.
```

Had issues until I realized "HADOOP_WORKER_NAME" is supposed to be 
"HADOOP_WORKER_NAMES" with an 'S'.

Github pull request to be sent shortly.






[jira] [Updated] (HADOOP-13396) Allow pluggable audit loggers in KMS

2016-08-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13396:
---
Attachment: HADOOP-13396.08.patch

Patch 8 fixes the style issues and adds a unit test for the reflection 
initialization.

> Allow pluggable audit loggers in KMS
> 
>
> Key: HADOOP-13396
> URL: https://issues.apache.org/jira/browse/HADOOP-13396
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13396.01.patch, HADOOP-13396.02.patch, 
> HADOOP-13396.03.patch, HADOOP-13396.04.patch, HADOOP-13396.05.patch, 
> HADOOP-13396.06.patch, HADOOP-13396.07.patch, HADOOP-13396.08.patch
>
>
> Currently, KMS audit log is using log4j, to write a text format log.
> We should refactor this, so that people can easily add new format audit logs. 
> The current text format log should be the default, and all of its behavior 
> should remain compatible.






[jira] [Commented] (HADOOP-13465) Design Server.Call to be extensible for unified call queue

2016-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431161#comment-15431161
 ] 

Hadoop QA commented on HADOOP-13465:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 11 new + 185 unchanged - 9 fixed = 196 total (was 194) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824882/HADOOP-13465.patch |
| JIRA Issue | HADOOP-13465 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6ec49b117296 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 115ecb5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10332/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10332/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10332/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Design Server.Call to be extensible for unified call queue
> --
>
> Key: HADOOP-13465
> URL: https://issues.apache.org/jira/browse/

[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-08-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431126#comment-15431126
 ] 

Anu Engineer commented on HADOOP-7363:
--

[~boky01] Thank you for your comments. I agree that a deeper cleanup in 
another JIRA would be useful. However, I feel that we should *not* check in 
the current change as is, where we catch an exception, log a warning, and 
continue running the test. That pattern is really hard to understand, and 
while I do see there are a number of things you mentioned that need cleanup, 
committing this fragment of code would be confusing to other maintainers.

So if you don't mind, let us either fix this one issue in this JIRA and get 
it committed, or fix all the things you are mentioning in this change list 
itself; either one works.

> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch, HADOOP-7363.05.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to insure adherence to the "contract" for 
> FileSystem behavior.  However, currently only HDFS and S3 do so.  
> RawLocalFileSystem, at least, needs to be added. 






[jira] [Updated] (HADOOP-13465) Design Server.Call to be extensible for unified call queue

2016-08-22 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-13465:
-
Attachment: HADOOP-13465.patch

Forgot to post patch.  No tests due to private internal changes to ipc server.

> Design Server.Call to be extensible for unified call queue
> --
>
> Key: HADOOP-13465
> URL: https://issues.apache.org/jira/browse/HADOOP-13465
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-13465.patch
>
>
> The RPC layer supports QoS but other protocols, ex. webhdfs, are completely 
> unconstrained.  Generalizing {{Server.Call}} to be extensible with simple 
> changes to the handlers will enable unifying the call queue for multiple 
> protocols.






[jira] [Commented] (HADOOP-11890) Uber-JIRA: Hadoop should support IPv6

2016-08-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431022#comment-15431022
 ] 

Allen Wittenauer commented on HADOOP-11890:
---

Minor nits:

{code}
 if [[ "${HADOOP_ALLOW_IPV6}" != "yes" ]]; then
   export HADOOP_OPTS=${HADOOP_OPTS:-"-Djava.net.preferIPv4Stack=true"}
 else
   export HADOOP_OPTS=${HADOOP_OPTS:-""}
 fi
{code}

* I'd prefer if this wasn't a negative test.
* This should really use true/false instead of yes/no to be consistent with the 
rest of the code.
* Where is HADOOP_ALLOW_IPV6 documented?
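Taken together, a version of the snippet addressing these nits might look like the following sketch (the variable names come from the quoted code; the true/false convention and positive test are the suggested changes, not the patch as posted):

```shell
# Sketch only: positive test and true/false values, per the review nits above.
# HADOOP_ALLOW_IPV6 and HADOOP_OPTS are taken from the quoted snippet.
if [[ "${HADOOP_ALLOW_IPV6}" == "true" ]]; then
  export HADOOP_OPTS=${HADOOP_OPTS:-""}
else
  export HADOOP_OPTS=${HADOOP_OPTS:-"-Djava.net.preferIPv4Stack=true"}
fi
```

With this form, IPv6 stays opt-in and the default (variable unset or anything other than "true") still forces the IPv4 stack.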

> Uber-JIRA: Hadoop should support IPv6
> -
>
> Key: HADOOP-11890
> URL: https://issues.apache.org/jira/browse/HADOOP-11890
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nate Edel
>Assignee: Nate Edel
>  Labels: ipv6
>
> Hadoop currently treats IPv6 as unsupported.  Track related smaller issues to 
> support IPv6.
> (Current case here is mainly HBase on HDFS, so any suggestions about other 
> test cases/workload are really appreciated.)






[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-08-22 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430932#comment-15430932
 ] 

Andras Bokor commented on HADOOP-7363:
--

Hi [~anu],

Good point. In addition, I would use the Assume framework so that the test is 
skipped instead of passed in the S3 case.
The problem is that this test is based on JUnit 3, and I am not sure whether 
converting it to JUnit 4 is out of scope for this ticket. If it is, I can 
create another clean-up ticket to take care of this and other issues (IDE and 
checkstyle warnings, {{testFilesystemIsCaseSensitive}} could also use Assume, 
and so on).
What do you think?

> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch, HADOOP-7363.05.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to insure adherence to the "contract" for 
> FileSystem behavior.  However, currently only HDFS and S3 do so.  
> RawLocalFileSystem, at least, needs to be added. 






[jira] [Commented] (HADOOP-13527) Add Spark to CallerContext LimitedPrivate scope

2016-08-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430928#comment-15430928
 ] 

Allen Wittenauer commented on HADOOP-13527:
---

Why wasn't this just made public?

> Add Spark to CallerContext LimitedPrivate scope
> ---
>
> Key: HADOOP-13527
> URL: https://issues.apache.org/jira/browse/HADOOP-13527
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Weiqing Yang
>Assignee: Weiqing Yang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13527.000.patch
>
>
> A lot of Spark applications run on Hadoop. Spark will invoke Hadoop caller 
> context APIs to set up its caller contexts in HDFS/Yarn, so Hadoop should add 
> Spark as one of the users in the LimitedPrivate scope.






[jira] [Commented] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.

2016-08-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430777#comment-15430777
 ] 

Chris Nauroth commented on HADOOP-13446:


Patch 006 for trunk got a full +1 from pre-commit.

Patch 006 for branch-2 had warnings on whitespace and Checkstyle.  I can fix 
whitespace on commit.  I do not plan to fix the remaining Checkstyle warnings.  
They are "no package-info.java" warnings on test code, which aren't 
particularly valuable.  The patch already provides a large overall net 
reduction in Checkstyle warnings from my clean-up work.

Steve, would you please take another look and let me know if you are still +1?

That's a good point about the HowToContribute wiki page.  I'll update that 
after this gets committed.

> Support running isolated unit tests separate from AWS integration tests.
> 
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch, 
> HADOOP-13446-branch-2.006.patch, HADOOP-13446.004.patch, 
> HADOOP-13446.005.patch, HADOOP-13446.006.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.






[jira] [Updated] (HADOOP-13475) Adding Append Blob support for WASB

2016-08-22 Thread Raul da Silva Martins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raul da Silva Martins updated HADOOP-13475:
---
Release Note: WASB now supports Azure Append blobs.

> Adding Append Blob support for WASB
> ---
>
> Key: HADOOP-13475
> URL: https://issues.apache.org/jira/browse/HADOOP-13475
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Raul da Silva Martins
>Assignee: Raul da Silva Martins
>Priority: Critical
> Attachments: 0001-Added-Support-for-Azure-AppendBlobs.patch, 
> HADOOP-13475.001.patch, HADOOP-13475.002.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support the 
> utilization of Azure AppendBlobs underneath. As owners of a large scale 
> service who intend to start writing to Append blobs, we need this support in 
> order to be able to keep using our HDI capabilities.
> This JIRA is added to implement Azure AppendBlob support to WASB.






[jira] [Commented] (HADOOP-13419) Fix javadoc warnings by JDK8 in hadoop-common package

2016-08-22 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430603#comment-15430603
 ] 

Masatake Iwasaki commented on HADOOP-13419:
---

Thanks for the update, [~lewuathe]. You picked javadoc comments outdated by the 
refactoring of HADOOP-13438. Some of them are not synced with the refactored 
code. It would be nice to fix them here.

{noformat}
@@ -2221,8 +2221,8 @@ private void authorizeConnection() throws 
WrappedRpcServerException {
 
 /**
  * Decode the a protobuf from the given input stream 
- * @param builder - Builder of the protobuf to decode
- * @param dis - DataInputStream to read the protobuf
+ * @param message - Builder of the protobuf to decode
+ * @param buffer - DataInputStream to read the protobuf
{noformat}
{{message}} is the class representing the type of message, and {{buffer}} is 
no longer a DataInputStream but a buffer.


> Fix javadoc warnings by JDK8 in hadoop-common package
> -
>
> Key: HADOOP-13419
> URL: https://issues.apache.org/jira/browse/HADOOP-13419
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13419-branch-2.01.patch, 
> HADOOP-13419-branch-2.02.patch, HADOOP-13419.01.patch, HADOOP-13419.02.patch, 
> HADOOP-13419.03.patch
>
>
> Fix compile warning generated after migrate JDK8.
> This is a subtask of HADOOP-13369.






[jira] [Commented] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.

2016-08-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430361#comment-15430361
 ] 

Steve Loughran commented on HADOOP-13446:
-

OK. The policy doc will need to be updated; presumably we can just say "declare 
which object store instance you ran {{mvn verify}} against".

The extra phases could be useful if we do want setup/teardown actions which 
span tests and which we don't want to include as part of the tests. Note, 
however, that it was in the Swift teardown() that I discovered that RAX UK 
throttled DELETE requests, so actually including that in test timing runs is 
useful.

> Support running isolated unit tests separate from AWS integration tests.
> 
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch, 
> HADOOP-13446-branch-2.006.patch, HADOOP-13446.004.patch, 
> HADOOP-13446.005.patch, HADOOP-13446.006.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.






[jira] [Commented] (HADOOP-13531) S3A output streams to share a single LocalDirAllocator for round-robin drive use

2016-08-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430351#comment-15430351
 ] 

Steve Loughran commented on HADOOP-13531:
-

May need to add a new unit test for this if we want to verify round-robin use; 
otherwise, regression testing.

> S3A output streams to share a single LocalDirAllocator for round-robin drive 
> use
> 
>
> Key: HADOOP-13531
> URL: https://issues.apache.org/jira/browse/HADOOP-13531
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> {{S3AOutputStream}} uses {{LocalDirAllocator}} to choose a directory from the 
> comma-separated list of buffers —but it creates a new instance for every 
> output stream. This misses a key point of the allocator: for it to do 
> round-robin allocation, it needs to remember the last disk written to. If a 
> new instance is used for every file: no history.






[jira] [Created] (HADOOP-13531) S3A output streams to share a single LocalDirAllocator for round-robin drive use

2016-08-22 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13531:
---

 Summary: S3A output streams to share a single LocalDirAllocator 
for round-robin drive use
 Key: HADOOP-13531
 URL: https://issues.apache.org/jira/browse/HADOOP-13531
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


{{S3AOutputStream}} uses {{LocalDirAllocator}} to choose a directory from the 
comma-separated list of buffers —but it creates a new instance for every output 
stream. This misses a key point of the allocator: for it to do round-robin 
allocation, it needs to remember the last disk written to. If a new instance is 
used for every file: no history.






[jira] [Commented] (HADOOP-13530) Upgrade S3 fs.s3.buffer.dir to support multi directories

2016-08-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430335#comment-15430335
 ] 

Steve Loughran commented on HADOOP-13530:
-

Although we are removing s3, and leaving s3n alone to avoid breaking it, s3a is 
undergoing lots of work. However, this feature isn't needed there; its config 
option {{fs.s3a.buffer.dir}} already takes a list, using {{LocalDirAllocator}} 
for the same QoS as HDFS itself: round-robin allocation. Though looking at the 
code there, it's doing it wrong (round-robin isn't being set up right).

For this JIRA, closing as a won't-fix. If you are still using ASF s3://, it is 
time to move to Hadoop 2.7+ and embrace s3a.
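For reference, the list form mentioned above can be sketched in a site configuration like this (the property name comes from the comment; the paths are hypothetical examples):

```xml
<!-- Sketch: spreading S3A upload buffering across disks (example paths). -->
<property>
  <name>fs.s3a.buffer.dir</name>
  <value>/mnt/disk1/s3a,/mnt/disk2/s3a,/mnt/disk3/s3a</value>
</property>
```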

> Upgrade S3 fs.s3.buffer.dir to support multi directories
> 
>
> Key: HADOOP-13530
> URL: https://issues.apache.org/jira/browse/HADOOP-13530
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.4.0
>Reporter: Adrian Muraru
>Assignee: Ted Malaska
>Priority: Minor
>
> fs.s3.buffer.dir defines the tmp folder where files will be written to before 
> getting sent to S3.  Right now this is limited to a single folder which 
> causes two major issues.
> 1. You need a drive with enough space to store all the tmp files at once
> 2. You are limited to the IO speeds of a single drive
> This is similar to HADOOP-10610 but applies to {{s3://}} hadoop block fs.






[jira] [Resolved] (HADOOP-13530) Upgrade S3 fs.s3.buffer.dir to support multi directories

2016-08-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13530.
-
Resolution: Won't Fix

> Upgrade S3 fs.s3.buffer.dir to support multi directories
> 
>
> Key: HADOOP-13530
> URL: https://issues.apache.org/jira/browse/HADOOP-13530
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.4.0
>Reporter: Adrian Muraru
>Assignee: Ted Malaska
>Priority: Minor
>
> fs.s3.buffer.dir defines the tmp folder where files will be written to before 
> getting sent to S3.  Right now this is limited to a single folder which 
> causes two major issues.
> 1. You need a drive with enough space to store all the tmp files at once
> 2. You are limited to the IO speeds of a single drive
> This is similar to HADOOP-10610 but applies to {{s3://}} hadoop block fs.






[jira] [Updated] (HADOOP-13530) Upgrade S3 fs.s3.buffer.dir to support multi directories

2016-08-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13530:

Target Version/s:   (was: 2.6.0)

> Upgrade S3 fs.s3.buffer.dir to support multi directories
> 
>
> Key: HADOOP-13530
> URL: https://issues.apache.org/jira/browse/HADOOP-13530
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.4.0
>Reporter: Adrian Muraru
>Assignee: Ted Malaska
>Priority: Minor
>
> fs.s3.buffer.dir defines the tmp folder where files will be written to before 
> getting sent to S3.  Right now this is limited to a single folder which 
> causes two major issues.
> 1. You need a drive with enough space to store all the tmp files at once
> 2. You are limited to the IO speeds of a single drive
> This is similar to HADOOP-10610 but applies to {{s3://}} hadoop block fs.






[jira] [Commented] (HADOOP-13527) Add Spark to CallerContext LimitedPrivate scope

2016-08-22 Thread Weiqing Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430248#comment-15430248
 ] 

Weiqing Yang commented on HADOOP-13527:
---

Thank you, [~ste...@apache.org] [~cnauroth] [~liuml07]

> Add Spark to CallerContext LimitedPrivate scope
> ---
>
> Key: HADOOP-13527
> URL: https://issues.apache.org/jira/browse/HADOOP-13527
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Weiqing Yang
>Assignee: Weiqing Yang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13527.000.patch
>
>
> A lot of Spark applications run on Hadoop. Spark will invoke Hadoop caller 
> context APIs to set up its caller contexts in HDFS/Yarn, so Hadoop should add 
> Spark as one of the users in the LimitedPrivate scope.






[jira] [Commented] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.

2016-08-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430244#comment-15430244
 ] 

Hadoop QA commented on HADOOP-13446:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 106 new or modified 
test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
31s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
34s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 27s{color} 
| {color:red} root-jdk1.8.0_101 with JDK v1.8.0_101 generated 6 new + 851 
unchanged - 0 fixed = 857 total (was 851) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 29s{color} 
| {color:red} root-jdk1.7.0_101 with JDK v1.7.0_101 generated 7 new + 943 
unchanged - 0 fixed = 950 total (was 943) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 25s{color} | {color:orange} root: The patch generated 4 new + 20 unchanged - 
69 fixed = 24 total (was 89) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-project in the patch passe

[jira] [Commented] (HADOOP-13530) Upgrade S3 fs.s3.buffer.dir to support multi directories

2016-08-22 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430225#comment-15430225
 ] 

Adrian Muraru commented on HADOOP-13530:


Actually, I see branch-2 is starting to deprecate s3:// (HADOOP-12709), so we 
might not need this feature at all.

> Upgrade S3 fs.s3.buffer.dir to support multi directories
> 
>
> Key: HADOOP-13530
> URL: https://issues.apache.org/jira/browse/HADOOP-13530
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.4.0
>Reporter: Adrian Muraru
>Assignee: Ted Malaska
>Priority: Minor
>
> fs.s3.buffer.dir defines the tmp folder where files will be written to before 
> getting sent to S3.  Right now this is limited to a single folder which 
> causes two major issues.
> 1. You need a drive with enough space to store all the tmp files at once
> 2. You are limited to the IO speeds of a single drive
> This is similar to HADOOP-10610 but applies to {{s3://}} hadoop block fs.


