[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17809524#comment-17809524 ]

ASF GitHub Bot commented on HADOOP-19044:
-----------------------------------------

ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1462026344


##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ACrossRegionAccess.java:
##########
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+
+import static org.apache.hadoop.fs.s3a.Constants.AWS_REGION;
+import static org.apache.hadoop.fs.s3a.Constants.CENTRAL_ENDPOINT;
+import static org.apache.hadoop.fs.s3a.Constants.ENDPOINT;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;
+
+/**
+ * Test to verify cross region bucket access.
+ */
+public class ITestS3ACrossRegionAccess extends AbstractS3ATestBase {
+
+  @Test
+  public void testCentralEndpointCrossRegionAccess() throws Throwable {

Review Comment:
   might be better to move this into ITestS3AEndpointRegion instead of creating a new test class?



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ACrossRegionAccess.java:
##########
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+
+import static org.apache.hadoop.fs.s3a.Constants.AWS_REGION;
+import static org.apache.hadoop.fs.s3a.Constants.CENTRAL_ENDPOINT;
+import static org.apache.hadoop.fs.s3a.Constants.ENDPOINT;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;
+
+/**
+ * Test to verify cross region bucket access.
+ */
+public class ITestS3ACrossRegionAccess extends AbstractS3ATestBase {
+
+  @Test
+  public void testCentralEndpointCrossRegionAccess() throws Throwable {
+    describe("Create bucket on different region and access it using central endpoint");
+    Configuration conf = getConfiguration();
+    removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);
+
+    Configuration newConf = new Configuration(conf);
+
+    newConf.set(ENDPOINT, CENTRAL_ENDPOINT);
+
+    try (S3AFileSystem newFs = new S3AFileSystem()) {

Review Comment:
   if you look at the tests in `ITestS3AEndpointRegion` you can see how we do it there: instead of creating anything, we just intercept the request and check that things are being set correctly.
   
   For this, what we want to see is that if the central endpoint is configured and no region is configured, the region gets set to US_EAST_2 for cross region access.
   
   But if the central endpoint is configured and the region is configured to US_EAST_1, then the region is US_EAST_1; that is, the region config takes precedence. See testCentralEndpoint in that class, which does something similar.
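   The precedence rule described above can be sketched as a small plain-Java model (the real logic lives in DefaultS3ClientFactory; the class and method names here are illustrative only, not S3A's):
   
   ```java
   // Hypothetical model of the region-resolution rule from this review:
   // explicit region config wins; otherwise the central endpoint triggers
   // cross-region access pinned to us-east-2. Names are illustrative only.
   public class RegionPrecedenceSketch {
   
     static final String CENTRAL_ENDPOINT = "s3.amazonaws.com";
   
     /**
      * Resolve the effective client region.
      * @param configuredRegion value of fs.s3a.endpoint.region, or null if unset
      * @param endpoint value of fs.s3a.endpoint, or null if unset
      * @return the region the client should use, or null to defer to the SDK chain
      */
     static String resolveRegion(String configuredRegion, String endpoint) {
       if (configuredRegion != null) {
         // explicit region config always takes precedence
         return configuredRegion;
       }
       if (endpoint != null && endpoint.endsWith(CENTRAL_ENDPOINT)) {
         // central endpoint with no region configured: cross-region access,
         // pinned to us-east-2 as suggested later in this review
         return "us-east-2";
       }
       return null; // fall through to the SDK region chain
     }
   }
   ```
   
   The two scenarios the test should cover are then `resolveRegion(null, "s3.amazonaws.com")` giving us-east-2, and `resolveRegion("us-east-1", "s3.amazonaws.com")` giving us-east-1.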



##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##########
@@ -320,6 +321,11 @@ private <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, ClientT> void
       origin = "SDK region chain";
     }
 
+    if (endpointStr != null && endpointStr.endsWith(CENTRAL_ENDPOINT)) {

Review Comment:
   instead of this, I think what we should do is on line 307, where it says:
   
   ```
       if (region != null) {
         builder.region(region);
       }
   ```
   
   do
   
   ```
       if (region != null) {
         builder.region(region);
         // compare Region instances with equals(), not ==
         if (US_EAST_1.equals(region) && endpointStr != null
             && endpointStr.endsWith(CENTRAL_ENDPOINT)) {
           builder.crossRegionAccessEnabled(true);
           LOG.debug("Enabling cross region access for endpoint {}", endpointStr);
         }
       }
   ```
   
   you will only get into that if block if you haven't set `fs.s3a.endpoint.region` and a region **could** be determined from your `fs.s3a.endpoint`, which will happen in this case, as `getS3RegionFromEndpoint` will return US_EAST_1 for s3.amazonaws.com.
   
   Currently we are setting cross region enabled whenever fs.s3a.endpoint = s3.amazonaws.com, even if you know your region and you've set it correctly in fs.s3a.endpoint.region.
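   Why the global endpoint falls into that branch can be illustrated with a toy endpoint-to-region parser (a sketch only; S3A's `getS3RegionFromEndpoint` has its own implementation, and this helper is hypothetical):
   
   ```java
   // Toy endpoint-to-region parser illustrating why s3.amazonaws.com maps to
   // us-east-1: the global endpoint carries no region component, so parsing
   // falls back to the default region. Names are illustrative, not S3A's.
   public class EndpointRegionSketch {
   
     /** Parse a region from an endpoint such as s3.us-west-2.amazonaws.com. */
     static String regionFromEndpoint(String endpoint) {
       if (endpoint == null || endpoint.equals("s3.amazonaws.com")) {
         // global/central endpoint: no region encoded, default to us-east-1
         return "us-east-1";
       }
       if (endpoint.startsWith("s3.") && endpoint.endsWith(".amazonaws.com")) {
         // strip the "s3." prefix and the ".amazonaws.com" suffix
         return endpoint.substring(3,
             endpoint.length() - ".amazonaws.com".length());
       }
       return null; // not a recognised S3 endpoint
     }
   }
   ```
   
   With parsing like this, s3.amazonaws.com always yields a non-null region (us-east-1), so the "region could be determined" path is taken even when the user never set a region themselves.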



##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##########
@@ -320,6 +321,11 @@ private <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, ClientT> void
       origin = "SDK region chain";
     }
 
+    if (endpointStr != null && endpointStr.endsWith(CENTRAL_ENDPOINT)) {

Review Comment:
   let's also set the region to US_EAST_2 when enabling cross region. When I was doing this work a few months ago, I found that cross region with US_EAST_1 was showing some weird behaviours; I didn't dive into them at the time. Everything worked as expected with US_EAST_2.





> AWS SDK V2 - Update S3A region logic 
> -------------------------------------
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set 
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is 
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in 
> this case; the region will be US_EAST_1), cross region access is not enabled. 
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: update the logic so that if the endpoint is the global 
> s3.amazonaws.com, cross region access is enabled.
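Until a fix lands, a deployment hitting these 400s under the default endpoint can sidestep the inference entirely by pinning the region. A hedged core-site.xml sketch, with eu-west-1 standing in for the bucket's actual region:

```
<!-- Pin the region explicitly so S3A does not have to infer it from the
     global endpoint; replace eu-west-1 with the bucket's actual region. -->
<property>
  <name>fs.s3a.endpoint.region</name>
  <value>eu-west-1</value>
</property>
```

With fs.s3a.endpoint.region set, the region config takes precedence and the cross-region code path is never consulted.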



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
