Ahmar Suhail created HADOOP-19044:
-------------------------------------

             Summary: AWS SDK V2 - Update S3A region logic 
                 Key: HADOOP-19044
                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 3.4.0
            Reporter: Ahmar Suhail


If both fs.s3a.endpoint and fs.s3a.endpoint.region are empty, Spark will set 
fs.s3a.endpoint to s3.amazonaws.com here:

[https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]

HADOOP-18908 updated the region logic so that if fs.s3a.endpoint.region is 
set, or if a region can be parsed from fs.s3a.endpoint (which will happen in 
this case; the region will be US_EAST_1), cross-region access is not enabled. 
This will cause 400 errors if the bucket is not in US_EAST_1.


Proposed: update the logic so that if the endpoint is the global 
s3.amazonaws.com, cross-region access is enabled.
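A minimal sketch of the proposed decision, assuming a standalone helper (the 
class, method, and constant names below are illustrative, not the actual 
S3AFileSystem code):

```java
// Hypothetical sketch of the proposed cross-region decision. Names are
// illustrative only; this is not the actual Hadoop S3A implementation.
public class RegionResolutionSketch {

    // The global endpoint that Spark injects when both properties are empty.
    static final String CENTRAL_ENDPOINT = "s3.amazonaws.com";

    /**
     * Decide whether cross-region access should be enabled:
     * - an explicit fs.s3a.endpoint.region disables it (region is known);
     * - a regional endpoint disables it (region is parsed from the endpoint);
     * - per the proposal, the global s3.amazonaws.com endpoint is treated
     *   like "no endpoint configured", so cross-region access stays enabled.
     */
    static boolean enableCrossRegionAccess(String endpoint, String configuredRegion) {
        if (configuredRegion != null && !configuredRegion.isEmpty()) {
            return false; // explicit fs.s3a.endpoint.region wins
        }
        if (endpoint == null || endpoint.isEmpty()) {
            return true; // nothing configured: allow cross-region resolution
        }
        // Proposed change: do not pin the region (US_EAST_1) parsed from
        // the global endpoint; enable cross-region access instead.
        return CENTRAL_ENDPOINT.equals(endpoint);
    }

    public static void main(String[] args) {
        // Spark-injected global endpoint, no region: cross-region enabled.
        System.out.println(enableCrossRegionAccess("s3.amazonaws.com", null));
        // Regional endpoint: region parsed from it, cross-region disabled.
        System.out.println(enableCrossRegionAccess("s3.eu-west-1.amazonaws.com", null));
        // Explicit region set: cross-region disabled.
        System.out.println(enableCrossRegionAccess("s3.amazonaws.com", "us-east-1"));
    }
}
```

With this check, a bucket outside US_EAST_1 accessed through the Spark-set 
global endpoint would be resolved cross-region instead of failing with a 400.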

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
