[ 
https://issues.apache.org/jira/browse/HADOOP-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351325#comment-14351325
 ] 

Hadoop QA commented on HADOOP-11670:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12703143/HADOOP-11670-003.patch
  against trunk revision 608ebd5.

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
                        Please justify why no new tests are needed for this 
patch.
                        Also please list what manual steps were performed to 
verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-aws.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5877//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5877//console

This message is automatically generated.

> Regression: s3a auth setup broken 
> ----------------------------------
>
>                 Key: HADOOP-11670
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11670
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.7.0
>            Reporter: Adam Budde
>            Assignee: Adam Budde
>            Priority: Blocker
>             Fix For: 2.7.0
>
>         Attachments: HADOOP-11670-001.patch, HADOOP-11670-003.patch, 
> HADOOP-11670.002.patch
>
>
> One big advantage provided by the s3a filesystem is the ability to use an IAM 
> instance profile in order to authenticate when attempting to access an S3 
> bucket from an EC2 instance. This eliminates the need to deploy AWS account 
> credentials to the instance or to provide them to Hadoop via the 
> fs.s3a.awsAccessKeyId and fs.s3a.awsSecretAccessKey params.
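>
> For reference, the static-key route (the step an IAM instance profile makes unnecessary) is configured through the two params named above. A minimal core-site.xml fragment, shown here purely for illustration with placeholder values:
> {code}
> <property>
>   <name>fs.s3a.awsAccessKeyId</name>
>   <value>YOUR_ACCESS_KEY_ID</value>
> </property>
> <property>
>   <name>fs.s3a.awsSecretAccessKey</name>
>   <value>YOUR_SECRET_ACCESS_KEY</value>
> </property>
> {code}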
> The patch submitted to resolve HADOOP-10714 breaks this behavior by using the 
> S3Credentials class to read the value of these two params. The change in 
> question is presented below:
> S3AFileSystem.java, lines 161-170:
> {code}
>     // Try to get our credentials or just connect anonymously
>     S3Credentials s3Credentials = new S3Credentials();
>     s3Credentials.initialize(name, conf);
>     AWSCredentialsProviderChain credentials = new AWSCredentialsProviderChain(
>         new BasicAWSCredentialsProvider(s3Credentials.getAccessKey(),
>                                         s3Credentials.getSecretAccessKey()),
>         new InstanceProfileCredentialsProvider(),
>         new AnonymousAWSCredentialsProvider()
>     );
> {code}
> As you can see, the getAccessKey() and getSecretAccessKey() methods from the 
> S3Credentials class are now used to provide constructor arguments to 
> BasicAWSCredentialsProvider. These methods raise an exception if the 
> fs.s3a.awsAccessKeyId or fs.s3a.awsSecretAccessKey param, respectively, is 
> missing. A user who relies on an IAM instance profile to authenticate to an 
> S3 bucket, and therefore supplies neither param, will hit that exception and 
> be unable to access the bucket.
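
One way to restore the IAM fallback described above is to read the two params leniently and hand static keys to BasicAWSCredentialsProvider only when both are actually set, letting the chain fall through to InstanceProfileCredentialsProvider otherwise. A minimal sketch of that guard logic in plain Java; the class and method names here are hypothetical and this is not the committed HADOOP-11670 patch:

```java
// Hypothetical helper, not the committed fix: decide whether the
// configured static credentials are usable before adding a
// BasicAWSCredentialsProvider to the provider chain.
public class S3ACredentialGuard {

    /** True only when both keys are present and non-empty. */
    public static boolean hasStaticCredentials(String accessKey,
                                               String secretKey) {
        return accessKey != null && !accessKey.isEmpty()
            && secretKey != null && !secretKey.isEmpty();
    }

    // In a method like S3AFileSystem.initialize(), the guard would be
    // consulted before constructing the static provider, so a missing
    // fs.s3a.* key param no longer throws and the instance-profile and
    // anonymous providers still get their turn in the chain.
}
```

With such a guard, a missing fs.s3a.awsAccessKeyId no longer aborts filesystem initialization; the credentials chain simply continues to the instance-profile provider.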



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
