[ https://issues.apache.org/jira/browse/HADOOP-18073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17584843#comment-17584843 ]

ASF GitHub Bot commented on HADOOP-18073:
-----------------------------------------

ahmarsuhail commented on PR #4706:
URL: https://github.com/apache/hadoop/pull/4706#issuecomment-1227250089

   Thanks @dannycjones, have moved those things to constants.
   
   RE using `eu-west-1` and the region logic in general:
   
   We're not using `us-east-1` because `headBucket()` fails with that region. 
`us-east-1` uses the global endpoint `s3.amazonaws.com`, so 
`bucket.s3.amazonaws.com` resolves to the region the bucket actually lives in. 
Since the request is signed for `us-east-1` rather than the bucket's region, 
signature validation fails. For more info, see 
[this](https://github.com/aws/aws-sdk-java/issues/1338) issue.
   
   I'm not sure how the region logic should behave or how it should handle 
failures. Returning `us-east-1` at the end 
[here](https://github.com/apache/hadoop/pull/4706/files#diff-a8dabc9bdb3ac3b04f92eadd1e3d9a7076d8983ca4fb7d1d146a1ac725caa309R558)
 is not of much use: without cross-region access, if the region is configured 
incorrectly, any request to S3 will fail anyway. Instead, to handle transient 
network failures, we should probably add some retry logic in 
[this](https://github.com/apache/hadoop/pull/4706/files#diff-a8dabc9bdb3ac3b04f92eadd1e3d9a7076d8983ca4fb7d1d146a1ac725caa309R530)
 method.  
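   As a sketch of what that retry could look like (a generic helper with 
assumed names; in practice S3A's existing retry machinery would be the real 
mechanism):

   ```java
   import java.util.concurrent.Callable;

   public class RetrySketch {
       /** Retry an action a few times with simple exponential backoff. */
       static <T> T withRetries(Callable<T> action, int attempts, long baseDelayMs)
               throws Exception {
           Exception last = null;
           for (int i = 0; i < attempts; i++) {
               try {
                   return action.call();
               } catch (Exception e) {
                   last = e;
                   Thread.sleep(baseDelayMs << i);  // 1x, 2x, 4x, ... base delay
               }
           }
           throw last;
       }

       public static void main(String[] args) throws Exception {
           final int[] calls = {0};
           // Fails twice (simulating transient network errors), then succeeds.
           String region = withRetries(() -> {
               if (++calls[0] < 3) {
                   throw new RuntimeException("connection reset");
               }
               return "eu-west-1";
           }, 5, 10);
           System.out.println(region + " after " + calls[0] + " attempts");
       }
   }
   ```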
   
   I'm also not sure if this new region/endpoint logic is sufficient to handle 
third-party stores; keen to know what other people think. 




> Upgrade AWS SDK to v2
> ---------------------
>
>                 Key: HADOOP-18073
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18073
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: auth, fs/s3
>    Affects Versions: 3.3.1
>            Reporter: xiaowei sun
>            Assignee: Ahmar Suhail
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: Upgrading S3A to SDKV2.pdf
>
>
> This task tracks upgrading Hadoop's AWS connector S3A from AWS SDK for Java 
> V1 to AWS SDK for Java V2.
> Original use case:
> {quote}We would like to access s3 with AWS SSO, which is supported in 
> software.amazon.awssdk:sdk-core:2.*.
> In particular, from 
> [https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html],
>  when to set 'fs.s3a.aws.credentials.provider', it must be 
> "com.amazonaws.auth.AWSCredentialsProvider". We would like to support 
> "software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider" which 
> supports AWS SSO, so users only need to authenticate once.
> {quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]