[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676225#comment-16676225
 ] 

Sunil Govindan commented on HADOOP-15583:
-

Updating Fix Version to 3.2.0

> Stabilize S3A Assumed Role support
> --
>
> Key: HADOOP-15583
> URL: https://issues.apache.org/jira/browse/HADOOP-15583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HADOOP-15583-001.patch, HADOOP-15583-002.patch, 
> HADOOP-15583-003.patch, HADOOP-15583-004.patch, HADOOP-15583-005.patch
>
>
> This started off as just sharing credentials across S3A and S3Guard, but in 
> the process it has grown into stabilising the assumed role support so it can 
> be used for more than just testing.
> Was: "S3Guard to get AWS Credential chain from S3AFS; credentials closed() on 
> shutdown"
> h3. Issue: lack of auth chain sharing causes ddb and s3 to get out of sync
> S3Guard builds its DDB auth chain itself, which stops it having to worry 
> about being created standalone vs part of an S3AFS, but it means its 
> authenticators are in a separate chain.
> When you are using short-lived assumed roles or other session credentials 
> updated in the S3A FS authentication chain, you need that same set of 
> credentials picked up by DDB. Otherwise, at best you are doubling load; at 
> worst, the DDB connector may not get refreshed credentials.
> Proposed: {{DynamoDBClientFactory.createDynamoDBClient()}} to take an 
> optional reference to AWS credentials. If set, don't create a new set. 
> There's one little complication here: our {{AWSCredentialProviderList}} list 
> is autocloseable; its close() will go through all children and close them. 
> Apparently the AWS S3 client (and hopefully the DDB client) will close this 
> when it is closed itself. If DDB has the same set of credentials as the FS, 
> then there could be trouble if they are closed in one place when the other 
> still wants to use them.
> Solution: reference-count uses of the credentials list, starting at one; 
> every close() call decrements, and when the count hits zero the cleanup is 
> kicked off (a sketch follows this description).
> h3. Issue: {{AssumedRoleCredentialProvider}} connector to STS not picking up 
> the s3a connection settings, including proxy.
> h3. Issue: we're not using getPassword() to get the user/password for proxy 
> binding for STS. Fix: use it and pass down the bucket ref for per-bucket 
> secrets in a JCEKS file.
> h3. Issue: hard to debug what's going wrong :)
> h3. Issue: docs about KMS permissions for SSE-KMS are wrong, and the 
> ITestAssumedRole* tests don't request KMS permissions, so fail in a bucket 
> when the base s3 FS is using SSE-KMS. KMS permissions need to be included in 
> generated profiles
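
A minimal sketch of the proposed reference-counted close (class and member 
names here are illustrative assumptions, not the final 
{{AWSCredentialProviderList}} code):

{code}
import java.io.Closeable;
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch: share one credential chain between the S3 and DDB clients safely. */
final class RefCountedCredentials implements Closeable {

  // starts at one for the creator; each client sharing the chain calls retain()
  private final AtomicInteger refCount = new AtomicInteger(1);

  RefCountedCredentials retain() {
    refCount.incrementAndGet();
    return this;
  }

  @Override
  public void close() {
    // only the last close() actually releases the child providers
    if (refCount.decrementAndGet() == 0) {
      cleanup();
    }
  }

  private void cleanup() {
    // close the child credential providers here
  }
}
{code}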



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-08-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16574341#comment-16574341
 ] 

Hudson commented on HADOOP-15583:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14734 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14734/])
HADOOP-15583. Stabilize S3A Assumed Role support. Contributed by Steve (stevel: 
rev da9a39eed138210de29b59b90c449b28da1c04f9)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumedRoleCommitOperations.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/RoleTestUtils.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentS3ClientFactory.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/NoAuthWithAWSException.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/STSClientFactory.java
* (edit) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/assumed_roles.md
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AssumedRoleCredentialProvider.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/AWSCredentialProviderList.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardConcurrentOps.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ClientFactory.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBClientFactory.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ATemporaryCredentials.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardWriteBack.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/RolePolicies.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/RoleModel.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml



[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-08-08 Thread Larry McCay (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16574202#comment-16574202
 ] 

Larry McCay commented on HADOOP-15583:
--

After a complete review, this LGTM. Once you address the previous minor things 
that I reported, you have my +1.




[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-08-06 Thread Larry McCay (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16571048#comment-16571048
 ] 

Larry McCay commented on HADOOP-15583:
--

Couple more nits...

* AWSCredentialProviderList.java -

// do this outsize the synchronized block.

s/outsize/outside/

* S3AUtils - 

result= new AWSServiceIOException(message, ddbException);

s/= / = /

 




[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-08-06 Thread Larry McCay (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570977#comment-16570977
 ] 

Larry McCay commented on HADOOP-15583:
--

Hi [~ste...@apache.org] - I thought STS was Security Token Service - no?

I am still in the process of reviewing it all in detail but wanted to ask that 
up front.




[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-08-06 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570914#comment-16570914
 ] 

Steve Loughran commented on HADOOP-15583:
-

The javac warning is about deprecation, which I'm not going to fix. 
This patch is ready for review/commit




[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-08-02 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567698#comment-16567698
 ] 

genericqa commented on HADOOP-15583:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
35s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 23m 
26s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 30m  2s{color} 
| {color:red} root generated 184 new + 1280 unchanged - 0 fixed = 1464 total 
(was 1280) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
5s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
34s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-15583 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934184/HADOOP-15583-005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux da649f590b98 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git 

[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-08-01 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566006#comment-16566006
 ] 

Steve Loughran commented on HADOOP-15583:
-

Javac:
{code}
[WARNING] 
/testptch/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ATemporaryCredentials.java:[94,15]
 [deprecation] setEndpoint(String) in AWSSecurityTokenService has been 
deprecated
{code}
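
For reference, the deprecated {{setEndpoint(String)}} has a builder-based 
replacement in the same SDK generation; a sketch of the substitution, with 
placeholder endpoint and region values:

{code}
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;

public class StsEndpointExample {
  public static void main(String[] args) {
    // instead of client.setEndpoint("..."), set the endpoint and signing
    // region when building the client
    AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.standard()
        .withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration(
                "sts.eu-west-1.amazonaws.com",  // placeholder endpoint
                "eu-west-1"))                   // placeholder signing region
        .build();
    System.out.println(sts);
  }
}
{code}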

The findbugs warnings are odd, as they should only be raised when the reference 
is to an array which isn't final. Maybe it's because even though the array is 
final, the elements in it are not; but in that case, why are only these two 
faulted? Because they are new? (See the sketch below.)
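
To see what FindBugs means, a stripped-down sketch of the mutable static 
array pattern it flags; the containing class is hypothetical, while 
KMS_KEY_RW/KMS_KEY_READ are the field names from the report:

{code}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public final class PolicyConstantsSketch {

  // flagged: the reference is final, but callers can still write elements,
  // e.g. PolicyConstantsSketch.KMS_KEY_RW[0] = "kms:*";
  public static final String[] KMS_KEY_RW = {
      "kms:Decrypt", "kms:GenerateDataKey"
  };

  // one conventional fix: keep the array private and expose an
  // unmodifiable view instead
  private static final String[] KMS_KEY_READ = {"kms:Decrypt"};

  public static List<String> kmsKeyRead() {
    return Collections.unmodifiableList(Arrays.asList(KMS_KEY_READ));
  }
}
{code}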




[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-07-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562689#comment-16562689
 ] 

genericqa commented on HADOOP-15583:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 29m 26s{color} 
| {color:red} root generated 1 new + 1463 unchanged - 5 fixed = 1464 total (was 
1468) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
14s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
20s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  org.apache.hadoop.fs.s3a.auth.RolePolicies.KMS_KEY_READ should be package 
protected  At RolePolicies.java: At RolePolicies.java:[line 65] |
|  |  org.apache.hadoop.fs.s3a.auth.RolePolicies.KMS_KEY_RW should be package 
protected  At RolePolicies.java: At RolePolicies.java:[line 58] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-15583 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-07-30 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562551#comment-16562551
 ] 

Steve Loughran commented on HADOOP-15583:
-

HADOOP-15583: patch 004
* AssumedRoleCredentialProvider retries the STS credential fetch using the main 
S3A retry attempt/sleep values.
* ITestAssumedRole tests set up roles to only use S3Guard client options; this 
makes sure we are correct about what perms are needed.
* ITestAssumedRole adds a test for asking for a session > 1h (sketched below); 
we can use this to see whether later SDKs remove/expand this check (HADOOP-15642)
* Review docs, add one more stack trace and try to trim those stacks down.

The AssumedRoleCredentialProvider retry code has an implicit dependency on 
HADOOP-15426, which expands the logic for detecting a throttle exception by 
asking the AWS SDK for its opinion. I don't see any easy way to test this 
code except by overloading the STS itself.
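
A standalone sketch of that > 1h probe (not the actual ITest code; the role 
ARN is assumed to come from test configuration):

{code}
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException;
import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;

/** Probe whether STS still rejects assumed-role sessions longer than 1h. */
public class SessionDurationProbe {
  public static void main(String[] args) {
    String roleArn = args[0];  // assumption: role ARN passed in by the caller
    AWSSecurityTokenService sts =
        AWSSecurityTokenServiceClientBuilder.defaultClient();
    AssumeRoleRequest request = new AssumeRoleRequest()
        .withRoleArn(roleArn)
        .withRoleSessionName("duration-probe")
        .withDurationSeconds(2 * 60 * 60);   // 2h: over the classic 1h cap
    try {
      sts.assumeRole(request);
      System.out.println("Accepted: the SDK/STS no longer enforce the 1h limit");
    } catch (AWSSecurityTokenServiceException e) {
      System.out.println("Rejected as expected: " + e.getErrorMessage());
    }
  }
}
{code}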






[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-07-26 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558790#comment-16558790
 ] 

Steve Loughran commented on HADOOP-15583:
-

+ AssumedRoleCredentialProvider to retry on throttling errors when asking for 
role credentials. For that, we'd need to recognise the throttling error; assume 
{{com.amazonaws.retry.RetryUtils.isThrottlingException()}} can do that (sketch 
below).
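
A minimal sketch of that retry idea as a hand-rolled loop (names are 
illustrative; the eventual patch wires this into the S3A retry policy rather 
than open-coding it):

{code}
import com.amazonaws.AmazonServiceException;
import com.amazonaws.retry.RetryUtils;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;
import com.amazonaws.services.securitytoken.model.AssumeRoleResult;

/** Retry an STS assume-role call on throttling errors only. */
public final class StsRetry {
  public static AssumeRoleResult assumeWithRetry(
      AWSSecurityTokenService sts, AssumeRoleRequest request,
      int attempts, long sleepMillis) throws InterruptedException {
    for (int attempt = 1; ; attempt++) {
      try {
        return sts.assumeRole(request);
      } catch (AmazonServiceException e) {
        // only throttling is retried; everything else propagates immediately
        if (attempt >= attempts || !RetryUtils.isThrottlingException(e)) {
          throw e;
        }
        Thread.sleep(sleepMillis * attempt);  // simple linear backoff
      }
    }
  }
}
{code}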




[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-07-24 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16555083#comment-16555083
 ] 

genericqa commented on HADOOP-15583:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 27m 44s{color} 
| {color:red} root generated 1 new + 1463 unchanged - 5 fixed = 1464 total (was 
1468) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 10 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
25s{color} | {color:red} hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
11s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
38s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  org.apache.hadoop.fs.s3a.auth.RolePolicies.KMS_KEY_READ should be package 
protected  At RolePolicies.java: At RolePolicies.java:[line 49] |
|  |  org.apache.hadoop.fs.s3a.auth.RolePolicies.KMS_KEY_RW should be package 
protected  At RolePolicies.java: At RolePolicies.java:[line 44] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HADOOP-15583 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-07-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16554963#comment-16554963
 ] 

Steve Loughran commented on HADOOP-15583:
-

Patch 003
 * assumed role tests work properly when the unrestricted client is using 
SSE-KMS, that is: they set the permissions up so that restricted RW has the KMS 
RW perms, and restricted RO only has the decrypt perms.
 * contains HADOOP-15627. S3A ITests failing if bucket explicitly set to 
s3guard+DDB
 * better handling and testing of getBucketLocation permission issues: the DDB 
metastore states what the problem is (sketched below), and the ITestAssumeRole 
tests all enable this permission
 * Docs cover the DDB, getBucketLocation and KMS permissions needed

Testing: S3 US-west. Some DDB overload issues as discussed in HADOOP-15426, and 
I rediscovered the "encryption tests fail if you mandate S3-KMS", but 
otherwise: fine
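
Roughly the shape of the friendlier getBucketLocation failure (a sketch under 
assumed names, not the actual DynamoDBMetadataStore code):

{code}
import java.io.IOException;
import java.nio.file.AccessDeniedException;

import org.apache.hadoop.fs.s3a.S3AFileSystem;

final class BucketLocationSketch {
  /** Name the missing permission instead of surfacing a bare 403. */
  static String bucketRegion(S3AFileSystem fs, String bucket) throws IOException {
    try {
      return fs.getBucketLocation(bucket);
    } catch (AccessDeniedException e) {
      AccessDeniedException translated = new AccessDeniedException(
          "S3 client role lacks permission s3:GetBucketLocation for s3a://"
              + bucket);
      translated.initCause(e);
      throw translated;
    }
  }
}
{code}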




[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-07-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16554801#comment-16554801
 ] 

Steve Loughran commented on HADOOP-15583:
-

Stack trace seen if your role doesn't have access to the SSE-KMS key used in 
the test configuration. The tests need to make sure the assumed role created 
has valid KMS access:

{code}

[ERROR] 
testPartialDeleteSingleDelete(org.apache.hadoop.fs.s3a.auth.ITestAssumeRole)  
Time elapsed: 1.888 s  <<< FAILURE!
java.lang.AssertionError
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
at 
java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677)
at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735)
at 
java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
at 
java.util.stream.ForEachOps$ForEachOp$OfInt.evaluateParallel(ForEachOps.java:189)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
at java.util.stream.IntPipeline.forEach(IntPipeline.java:404)
at java.util.stream.IntPipeline$Head.forEach(IntPipeline.java:560)
at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.touchFiles(ITestAssumeRole.java:608)
at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.executePartialDelete(ITestAssumeRole.java:776)
at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.testPartialDeleteSingleDelete(ITestAssumeRole.java:748)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.lang.AssertionError: java.nio.file.AccessDeniedException: 
fork-0001/test/testPartialDeleteSingleDelete/file-10: put on 
fork-0001/test/testPartialDeleteSingleDelete/file-10: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
0EFFB121C9AB6F2D; S3 Extended Request ID: 
vbk+mO9d1DS/Rs6HpacmadZHC/M9zlioTGoqATbkg7bfd8DLKuMBjUm1OytRFMdPJSbvl85qbSQ=), 
S3 Extended Request ID: 
vbk+mO9d1DS/Rs6HpacmadZHC/M9zlioTGoqATbkg7bfd8DLKuMBjUm1OytRFMdPJSbvl85qbSQ=:AccessDenied
at org.apache.hadoop.test.LambdaTestUtils.eval(LambdaTestUtils.java:644)
at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.lambda$touchFiles$11(ITestAssumeRole.java:609)
at 
java.util.stream.ForEachOps$ForEachOp$OfInt.accept(ForEachOps.java:205)
at 
java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:114)
at java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:693)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at 
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at 
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: java.nio.file.AccessDeniedException: 
fork-0001/test/testPartialDeleteSingleDelete/file-10: put on 
fork-0001/test/testPartialDeleteSingleDelete/file-10: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
0EFFB121C9AB6F2D; S3 Extended Request ID: 
vbk+mO9d1DS/Rs6HpacmadZHC/M9zlioTGoqATbkg7bfd8DLKuMBjUm1OytRFMdPJSbvl85qbSQ=), 
S3 Extended 

[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-07-20 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551489#comment-16551489
 ] 

Steve Loughran commented on HADOOP-15583:
-

New stack trace
{code}
[ERROR] 
testAssumeRoleRestrictedPolicyFS(org.apache.hadoop.fs.s3a.auth.ITestAssumeRole) 
 Time elapsed: 6.591 s  <<< ERROR!
java.nio.file.AccessDeniedException: S3 client role lacks permission 
s3:GetBucketLocation for s3a://hwdev-steve-ireland-new
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:303)
at 
org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:99)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:339)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.testAssumeRoleRestrictedPolicyFS(ITestAssumeRole.java:311)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.nio.file.AccessDeniedException: hwdev-steve-ireland-new: 
getBucketLocation() on hwdev-steve-ireland-new: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
D7F8EC16C760DCF7; S3 Extended Request ID: 
eInD8LoB0468wPBwfBNSXW5dLqCvA1EUrscqfhPxhAJsBMMdW6FzwanIs7BhAdrOxYebZefR0jw=), 
S3 Extended Request ID: 
eInD8LoB0468wPBwfBNSXW5dLqCvA1EUrscqfhPxhAJsBMMdW6FzwanIs7BhAdrOxYebZefR0jw=:AccessDenied
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:229)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:231)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:528)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:516)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:297)
... 18 more
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied 
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
D7F8EC16C760DCF7; S3 Extended Request ID: 
eInD8LoB0468wPBwfBNSXW5dLqCvA1EUrscqfhPxhAJsBMMdW6FzwanIs7BhAdrOxYebZefR0jw=), 
S3 Extended Request ID: 
eInD8LoB0468wPBwfBNSXW5dLqCvA1EUrscqfhPxhAJsBMMdW6FzwanIs7BhAdrOxYebZefR0jw=
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4325)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4272)
at 

[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-07-20 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551488#comment-16551488
 ] 

Steve Loughran commented on HADOOP-15583:
-

If you have S3Guard enabled but your role lacks the s3:GetBucketLocation permission, the 
first S3 call to fail is the getBucketLocation() operation used to determine the 
region of the S3Guard table; this error message can be improved
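
A minimal sketch of how such a translation might look: wrap the region lookup so the 403 
surfaces the missing permission by name. The helper class, its placement, and its 
signature are assumptions, not the committed fix; the message text mirrors the improved 
trace shown in the later comment above.
{code}
// Sketch only: rethrow an AccessDeniedException from getBucketLocation()
// with a message naming the permission the role lacks. Class name, method
// name and placement are illustrative assumptions.
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import org.apache.hadoop.fs.s3a.S3AFileSystem;

public final class RegionLookup {
  private RegionLookup() {
  }

  static String lookupTableRegion(S3AFileSystem fs, String bucket)
      throws IOException {
    try {
      return fs.getBucketLocation(bucket);
    } catch (AccessDeniedException e) {
      // Preserve the original failure as the cause, but lead with the
      // missing permission so the user knows what to grant.
      throw (AccessDeniedException) new AccessDeniedException(
          "s3a://" + bucket, null,
          "S3 client role lacks permission s3:GetBucketLocation for s3a://"
              + bucket).initCause(e);
    }
  }
}
{code}
The current, untranslated failure: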
{code}
[ERROR] 
testAssumeRoleRestrictedPolicyFS(org.apache.hadoop.fs.s3a.auth.ITestAssumeRole) 
 Time elapsed: 6.396 s  <<< ERROR!
java.nio.file.AccessDeniedException: hwdev-steve-ireland-new: 
getBucketLocation() on hwdev-steve-ireland-new: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
A8B11AD20AE20391; S3 Extended Request ID: 
+SSWoeRU5/mwTCs7WnWScs52x8aSJbVfhQ/lVM3fXb5G/0EU0T4BdRRMgPJImufrd/jDDl8/uWk=), 
S3 Extended Request ID: 
+SSWoeRU5/mwTCs7WnWScs52x8aSJbVfhQ/lVM3fXb5G/0EU0T4BdRRMgPJImufrd/jDDl8/uWk=:AccessDenied
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:229)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:231)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:528)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:516)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:294)
at 
org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:99)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:339)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.testAssumeRoleRestrictedPolicyFS(ITestAssumeRole.java:311)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied 
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
A8B11AD20AE20391; S3 Extended Request ID: 
+SSWoeRU5/mwTCs7WnWScs52x8aSJbVfhQ/lVM3fXb5G/0EU0T4BdRRMgPJImufrd/jDDl8/uWk=), 
S3 Extended Request ID: 
+SSWoeRU5/mwTCs7WnWScs52x8aSJbVfhQ/lVM3fXb5G/0EU0T4BdRRMgPJImufrd/jDDl8/uWk=
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4325)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4272)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4266)
at 

[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-07-20 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551447#comment-16551447
 ] 

Steve Loughran commented on HADOOP-15583:
-

FWIW, access-denied failures in DynamoDB come back as a 400 status whose error code is 
AccessDeniedException in the response body, *not a 403* as with S3 (see the trace below).

This'll be handled in this patch, mapping it to java.nio.file.AccessDeniedException
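
A hedged sketch of that mapping; the helper name mirrors the 
S3AUtils.translateDynamoDBException frame in the trace below, but the exact signature 
and the fallback exception type are assumptions, not the patch itself.
{code}
// Sketch of mapping a DynamoDB access-denied failure (HTTP 400 with error
// code "AccessDeniedException") to java.nio.file.AccessDeniedException.
// Signature and fallback type are assumptions.
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException;

public final class DdbTranslation {
  private DdbTranslation() {
  }

  static IOException translateDynamoDBException(String message,
      AmazonDynamoDBException e) {
    if ("AccessDeniedException".equals(e.getErrorCode())) {
      // Surface as the standard java.nio access-denied exception so callers
      // can handle S3 and DDB permission failures uniformly.
      return (IOException) new AccessDeniedException(message, null,
          e.toString()).initCause(e);
    }
    // Anything else stays a generic IOException wrapper in this sketch.
    return new IOException(message, e);
  }
}
{code}
The trace being translated: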
{code}
[ERROR] testRestrictedRename(org.apache.hadoop.fs.s3a.auth.ITestAssumeRole)  
Time elapsed: 9.933 s  <<< ERROR!
java.nio.file.AccessDeniedException: hwdev-steve-ireland-new: 
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: User: 
arn:aws:sts::980678866538:assumed-role/stevel-s3-restricted/valid is not 
authorized to perform: dynamodb:DescribeTable on resource: 
arn:aws:dynamodb:eu-west-1:980678866538:table/hwdev-steve-ireland-new (Service: 
AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request 
ID: ORVTQPTD262GPBK7ILH83BE4D7VV4KQNSO5AEMVJF66Q9ASUAAJG)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:406)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:192)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:971)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:312)
at 
org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:99)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:339)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.executeRestrictedRename(ITestAssumeRole.java:471)
at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.testRestrictedRename(ITestAssumeRole.java:437)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: 
User: arn:aws:sts::980678866538:assumed-role/stevel-s3-restricted/valid is not 
authorized to perform: dynamodb:DescribeTable on resource: 
arn:aws:dynamodb:eu-west-1:980678866538:table/hwdev-steve-ireland-new (Service: 
AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request 
ID: ORVTQPTD262GPBK7ILH83BE4D7VV4KQNSO5AEMVJF66Q9ASUAAJG)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2925)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2901)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:1515)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1491)
at 
com.amazonaws.services.dynamodbv2.document.Table.describe(Table.java:137)
at 

[jira] [Commented] (HADOOP-15583) Stabilize S3A Assumed Role support

2018-07-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541668#comment-16541668
 ] 

Steve Loughran commented on HADOOP-15583:
-

New DDB stack trace in {{ITestS3GuardConcurrentOps}} while testing this; connect 
timeout = 15000 ms. Assumed unrelated: most likely just a function of the parallel load 
of a test run with {{-Dparallel-tests -DtestsThreadCount=6 -Ds3guard -Ddynamodb}} plus 
network delays.
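
If this recurs under parallel load, one speculative mitigation is to raise the S3A 
connection timeouts, on the assumption that the S3Guard DDB client is built from the 
same S3A connection settings as the S3 client. The keys are standard S3A options (in 
milliseconds); the values below are guesses for a loaded network, not tuned 
recommendations.
{code}
// Illustrative only: give the DDB client more connection headroom when many
// parallel test workers share one network path. Values are assumptions.
import org.apache.hadoop.conf.Configuration;

public final class TimeoutTuning {
  private TimeoutTuning() {
  }

  static Configuration withLongerTimeouts(Configuration conf) {
    // TCP connection establishment timeout, ms.
    conf.setInt("fs.s3a.connection.establish.timeout", 30000);
    // Socket timeout on an established connection, ms.
    conf.setInt("fs.s3a.connection.timeout", 200000);
    return conf;
  }
}
{code}
The timeout trace: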

{code}
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 52.5 s 
<<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps
[ERROR] 
testConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps)
  Time elapsed: 52.384 s  <<< FAILURE!
java.lang.AssertionError: 1/16 threads threw exceptions while initializing on 
iteration 2
at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:172)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.lang.IllegalArgumentException: Table 
testConcurrentTableCreations578159635 did not transition into ACTIVE state.
at 
com.amazonaws.services.dynamodbv2.document.Table.waitForActive(Table.java:489)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.waitForTableActive(DynamoDBMetadataStore.java:1040)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:931)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:359)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps$2.call(ITestS3GuardConcurrentOps.java:145)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps$2.call(ITestS3GuardConcurrentOps.java:136)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: 
Connect to dynamodb.eu-west-1.amazonaws.com:443 
[dynamodb.eu-west-1.amazonaws.com/52.94.25.90] failed: connect timed out
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1114)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1064)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2925)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2901)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:1515)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1491)
at 
com.amazonaws.services.dynamodbv2.waiters.DescribeTableFunction.apply(DescribeTableFunction.java:53)
at 
com.amazonaws.services.dynamodbv2.waiters.DescribeTableFunction.apply(DescribeTableFunction.java:24)
at 
com.amazonaws.waiters.WaiterExecution.getCurrentState(WaiterExecution.java:102)
at 
com.amazonaws.waiters.WaiterExecution.pollResource(WaiterExecution.java:74)
at