[ https://issues.apache.org/jira/browse/HADOOP-18684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707246#comment-17707246 ]

ASF GitHub Bot commented on HADOOP-18684:
-----------------------------------------

steveloughran commented on code in PR #5521:
URL: https://github.com/apache/hadoop/pull/5521#discussion_r1154393808


##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUrlScheme.java:
##########
@@ -1,31 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.hadoop.fs.s3a;
 
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.junit.Test;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
-import org.junit.Test;
-
-import java.io.IOException;
 
 public class ITestS3AUrlScheme extends AbstractS3ATestBase{
-
-  @Override
-  public void setup() throws Exception {
-    super.setup();
-  }
-
   @Override
   protected Configuration createConfiguration() {
     Configuration conf = super.createConfiguration();
-    conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
+    conf.set("fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
     return conf;
   }
 
   @Test
-  public void testFSScheme() throws IOException {
-    FileSystem fs = getFileSystem();
-    assertEquals(fs.getScheme(), "s3a");
+  public void testFSScheme() throws IOException, URISyntaxException {
+    FileSystem fs = FileSystem.get(new URI("s3://mybucket/path"),

Review Comment:
   use a try-with-resources here so it is cleaned up after the test case.
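   The shape being asked for can be sketched in isolation. `FakeFileSystem` below is a hypothetical stand-in for the Closeable Hadoop `FileSystem` returned by `FileSystem.get()`, used only so the sketch runs without Hadoop on the classpath; the real test would apply the same try-with-resources structure to the actual filesystem:

   ```java
   public class TryWithResourcesSketch {
     // Hypothetical stand-in for the Closeable Hadoop FileSystem.
     static class FakeFileSystem implements AutoCloseable {
       boolean closed = false;
       String getScheme() { return "s3"; }
       @Override public void close() { closed = true; }
     }

     public static void main(String[] args) {
       FakeFileSystem captured = null;
       try (FakeFileSystem fs = new FakeFileSystem()) {
         captured = fs;
         // the test-body assertions on fs would go here
         System.out.println("scheme=" + fs.getScheme());
       }
       // close() ran when the try block exited, even on failure paths
       System.out.println("closed=" + captured.closed);
     }
   }
   ```

   The point of the pattern is that cleanup no longer depends on the test body completing normally: an assertion failure inside the block still closes the filesystem.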



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/fileContext/ITestS3AFileContext.java:
##########
@@ -13,11 +13,31 @@
  */
 package org.apache.hadoop.fs.s3a.fileContext;
 
-import org.apache.hadoop.fs.TestFileContext;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.*;
+import org.junit.Test;
+
+import java.io.IOException;

Review Comment:
   checkstyle says unused; cut



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/fileContext/ITestS3AFileContext.java:
##########
@@ -13,11 +13,31 @@
  */
 package org.apache.hadoop.fs.s3a.fileContext;
 
-import org.apache.hadoop.fs.TestFileContext;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.*;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;

Review Comment:
   move these to the top, ahead of the other imports. You may want to check 
your IDE settings here, to avoid so much pain



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/fileContext/ITestS3AFileContext.java:
##########
@@ -13,11 +13,31 @@
  */
 package org.apache.hadoop.fs.s3a.fileContext;
 
-import org.apache.hadoop.fs.TestFileContext;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.*;

Review Comment:
   explicit imports please; less brittle



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUrlScheme.java:
##########
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+public class ITestS3AUrlScheme extends AbstractS3ATestBase{
+  @Override
+  protected Configuration createConfiguration() {
+    Configuration conf = super.createConfiguration();
+    conf.set("fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
+    return conf;
+  }
+
+  @Test
+  public void testFSScheme() throws IOException, URISyntaxException {
+    FileSystem fs = FileSystem.get(new URI("s3://mybucket/path"),
+        getConfiguration());
+    assertEquals("s3a", fs.getScheme());
+    Path path = fs.makeQualified(new Path("tmp/path"));
+    assertEquals("s3", path.toUri().getScheme());
+  }

Review Comment:
   add a test to verify that fs.getScheme() also equals s3. Currently it won't, 
as the scheme is hard-coded. We need something to adapt the scheme once 
`initialize()` is called; it still needs to return a value before then, for 
which "s3a" is the right default
   
   ```java
     public String getScheme() {
       return uri != null
         ? uri.getScheme()
         : "s3a";
     }
   ```
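   The suggested pattern can be exercised on its own: "s3a" until `initialize()` records a URI, then whatever scheme the URI carries. `MiniFs` below is a hypothetical stand-in for S3AFileSystem, not the real class:

   ```java
   import java.net.URI;

   public class SchemeSketch {
     // Hypothetical stand-in for S3AFileSystem, reduced to the scheme logic.
     static class MiniFs {
       private URI uri;                       // null until initialize()
       void initialize(URI name) { this.uri = name; }
       String getScheme() {
         // follow the bound URI once initialized; default to "s3a" before then
         return uri != null ? uri.getScheme() : "s3a";
       }
     }

     public static void main(String[] args) {
       MiniFs fs = new MiniFs();
       System.out.println("before=" + fs.getScheme()); // before=s3a
       fs.initialize(URI.create("s3://mybucket/path"));
       System.out.println("after=" + fs.getScheme());  // after=s3
     }
   }
   ```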
   



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/fileContext/ITestS3AFileContext.java:
##########
@@ -13,11 +13,31 @@
  */
 package org.apache.hadoop.fs.s3a.fileContext;
 
-import org.apache.hadoop.fs.TestFileContext;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.*;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import static org.junit.Assert.assertEquals;
 
 /**
  * Implementation of TestFileContext for S3a.
  */
 public class ITestS3AFileContext extends TestFileContext{
 
+  @Test
+  public void testScheme()
+      throws URISyntaxException, UnsupportedFileSystemException {
+    Configuration conf = new Configuration();
+    URI uri = new URI("s3://mybucket/path");
+    conf.set("fs.AbstractFileSystem.s3.impl",
+        "org.apache.hadoop.fs.s3a.S3A");
+    FileContext fc = FileContext.getFileContext(uri, conf);

Review Comment:
   if FileContext implemented Closeable it'd need closing too, but it doesn't, 
so we can't





> Fix S3A filesystem such that the scheme matches the URI scheme
> --------------------------------------------------------------
>
>                 Key: HADOOP-18684
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18684
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.3.5
>            Reporter: Harshit Gupta
>            Priority: Major
>              Labels: pull-request-available
>
> Certain codepaths use the FileContext APIs to perform FS-based operations 
> such as yarn log aggregation. While trying to reuse the S3A connector for 
> GCS-based workloads, the yarn log aggregation was not happening. Upon further 
> investigation it was observed that the FileContext APIs have hardcoded URI 
> scheme checks that need to be disabled/updated to make S3A compatible with 
> non-AWS stores.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
