[ 
https://issues.apache.org/jira/browse/HADOOP-18073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17582586#comment-17582586
 ] 

ASF GitHub Bot commented on HADOOP-18073:
-----------------------------------------

ahmarsuhail commented on code in PR #4706:
URL: https://github.com/apache/hadoop/pull/4706#discussion_r950871357


##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##########
@@ -390,4 +490,63 @@ protected static AmazonS3 configureAmazonS3Client(AmazonS3 s3,
         endpoint, epr, region);
     return new AwsClientBuilder.EndpointConfiguration(endpoint, region);
   }
+
+  /**
+   * Given an endpoint string, create the endpoint URI.
+   *
+   * @param endpoint possibly null endpoint.
+   * @param conf config to build the URI from.
+   * @return an endpoint uri
+   */
+  private static URI getS3Endpoint(String endpoint, final Configuration conf) {
+
+    boolean secureConnections = conf.getBoolean(SECURE_CONNECTIONS,
+        DEFAULT_SECURE_CONNECTIONS);
+
+    String protocol = secureConnections ? HTTPS : HTTP;
+
+    if (endpoint == null || endpoint.isEmpty()) {
+      // the default endpoint
+      endpoint = CENTRAL_ENDPOINT;
+    }
+
+    if (!endpoint.contains("://")) {
+      endpoint = String.format("%s://%s", protocol, endpoint);
+    }
+
+    try {
+      return new URI(endpoint);
+    } catch (URISyntaxException e) {
+      throw new IllegalArgumentException(e);
+    }
+  }
+
+  /**
+   * Get the bucket region.
+   *
+   * @param region AWS S3 region set in the config. This property may not be
+   *               set, in which case ask S3 for the region.
+   * @param s3Client An S3 client with default config, used for getting the
+   *                 region of a bucket.
+   * @return region of the bucket.
+   */
+  private Region getS3Region(String region, S3Client s3Client) {

Review Comment:
   Wondering what the behaviour here should be. If headBucket fails for a 
reason such as a network failure, we should probably retry; if it continues 
to fail, throw an error.
   Returning US_EAST_1 as a default just means the client will be configured 
with us-east-1, and when it then tries to make a call, e.g. 
s3.listObjects(), that call will fail because the bucket is not in us-east-1 
and there is no cross-region access.
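   The retry-then-fail behaviour being suggested could be sketched generically, independent of the SDK. This is only an illustration of the pattern (the helper name, retry count, and simulated call are hypothetical, not S3A's actual retry policy or the real headBucket signature):

```java
import java.util.concurrent.Callable;

public class RetrySketch {

  // Hypothetical helper: retry a call a fixed number of times, then rethrow
  // the last failure instead of silently falling back to a default region.
  static <T> T withRetries(Callable<T> call, int maxAttempts) throws Exception {
    Exception last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return call.call();
      } catch (Exception e) {
        last = e;  // e.g. a transient network failure from a headBucket-like call
      }
    }
    throw last;  // repeated failure: surface the error to the caller
  }

  public static void main(String[] args) throws Exception {
    // Simulate a region lookup that fails twice with a transient error,
    // then succeeds on the third attempt.
    int[] calls = {0};
    String region = withRetries(() -> {
      if (++calls[0] < 3) {
        throw new RuntimeException("transient network failure");
      }
      return "eu-west-1";
    }, 5);
    System.out.println(region + " after " + calls[0] + " attempts");
    // prints "eu-west-1 after 3 attempts"
  }
}
```

   Failing loudly after the retries are exhausted matches the concern above: a wrong us-east-1 default only defers the error to the first real S3 call.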

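   For reference, the endpoint normalization in the getS3Endpoint hunk above can be exercised standalone. A minimal sketch, with "s3.amazonaws.com" standing in for CENTRAL_ENDPOINT and a plain boolean standing in for the SECURE_CONNECTIONS config lookup:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class EndpointSketch {

  // Mirrors the logic in getS3Endpoint: default the endpoint when unset,
  // prefix a protocol when none is present, then build the URI.
  static URI toEndpointUri(String endpoint, boolean secureConnections) {
    String protocol = secureConnections ? "https" : "http";
    if (endpoint == null || endpoint.isEmpty()) {
      // stand-in for CENTRAL_ENDPOINT; the real value comes from S3A constants
      endpoint = "s3.amazonaws.com";
    }
    if (!endpoint.contains("://")) {
      endpoint = String.format("%s://%s", protocol, endpoint);
    }
    try {
      return new URI(endpoint);
    } catch (URISyntaxException e) {
      throw new IllegalArgumentException(e);
    }
  }

  public static void main(String[] args) {
    // A bare host gets the protocol prefixed; an explicit scheme is kept as-is.
    System.out.println(toEndpointUri("s3.eu-west-2.amazonaws.com", true));
    // prints "https://s3.eu-west-2.amazonaws.com"
    System.out.println(toEndpointUri("http://localhost:9000", true));
    // prints "http://localhost:9000"
  }
}
```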




> Upgrade AWS SDK to v2
> ---------------------
>
>                 Key: HADOOP-18073
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18073
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: auth, fs/s3
>    Affects Versions: 3.3.1
>            Reporter: xiaowei sun
>            Assignee: Ahmar Suhail
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: Upgrading S3A to SDKV2.pdf
>
>
> This task tracks upgrading Hadoop's AWS connector S3A from AWS SDK for Java 
> V1 to AWS SDK for Java V2.
> Original use case:
> {quote}We would like to access s3 with AWS SSO, which is supported inĀ 
> software.amazon.awssdk:sdk-core:2.*.
> In particular, from 
> [https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html],
>  when setting 'fs.s3a.aws.credentials.provider', it must be 
> "com.amazonaws.auth.AWSCredentialsProvider". We would like to support 
> "software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider" which 
> supports AWS SSO, so users only need to authenticate once.
> {quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
