[ https://issues.apache.org/jira/browse/FLINK-8439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558266#comment-16558266 ]

ASF GitHub Bot commented on FLINK-8439:
---------------------------------------

aljoscha commented on a change in pull request #6405: [FLINK-8439] Add Flink 
shading to AWS credential provider s3 hadoop c…
URL: https://github.com/apache/flink/pull/6405#discussion_r205441268
 
 

 ##########
 File path: 
flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/AbstractS3FileSystemFactory.java
 ##########
 @@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.fs.hdfs;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.FileSystemFactory;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.URI;
+
+/** Base class for S3 file system factories. */
+public abstract class AbstractS3FileSystemFactory implements FileSystemFactory {
 
 Review comment:
   I don't like putting S3 specifics into the generic Hadoop FS package. We 
could call this one `AbstractHadoopFileSystemFactory`, leave out the 
`getScheme()` implementation, and drop the mentions of S3 to make it properly 
independent of S3.
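
   For illustration only (not part of the PR), a minimal sketch of the scheme-agnostic base class that suggestion points at, assuming the `getScheme()`/`configure(Configuration)`/`create(URI)` methods of the `FileSystemFactory` interface shown in the imports above; the class and field names below are hypothetical:

{code:java}
package org.apache.flink.runtime.fs.hdfs;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.FileSystemFactory;

import java.io.IOException;
import java.net.URI;

/** Hypothetical sketch: a generic Hadoop-based factory with no S3 references. */
public abstract class AbstractHadoopFileSystemFactory implements FileSystemFactory {

	/** Flink configuration captured in configure(); subclasses translate it into Hadoop settings. */
	protected Configuration flinkConfig;

	@Override
	public void configure(Configuration config) {
		this.flinkConfig = config;
	}

	// getScheme() is deliberately left unimplemented here, as the review suggests;
	// each concrete factory (e.g. an S3 factory in its own module) declares its own scheme.

	@Override
	public abstract FileSystem create(URI fsUri) throws IOException;
}
{code}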

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Document using a custom AWS Credentials Provider with flink-s3-fs-hadoop
> ------------------------------------------------------------------------
>
>                 Key: FLINK-8439
>                 URL: https://issues.apache.org/jira/browse/FLINK-8439
>             Project: Flink
>          Issue Type: Improvement
>          Components: Documentation
>            Reporter: Dyana Rose
>            Assignee: Andrey Zagrebin
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.4.3, 1.5.3
>
>
> This came up when using S3 as the file system backend and running under ECS.
> With no credentials in the container, hadoop-aws will default to EC2 
> instance-level credentials when accessing S3. However, when running under ECS, 
> you will generally want to default to the task definition's IAM role.
> In this case you need to set the Hadoop property
> {code:java}
> fs.s3a.aws.credentials.provider{code}
> to one or more fully qualified class names; see the [hadoop-aws 
> docs|https://github.com/apache/hadoop/blob/1ba491ff907fc5d2618add980734a3534e2be098/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md]
> This works as expected when you add this setting to flink-conf.yaml, but there 
> is a further 'gotcha.' Because the AWS SDK is shaded, the actual fully 
> qualified class name for, in this case, the ContainerCredentialsProvider is
> {code:java}
> org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider{code}
>  
> meaning the full setting is:
> {code:java}
> fs.s3a.aws.credentials.provider: org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider{code}
> If you instead set it to the unshaded class name, you will see a very 
> confusing error stating that the ContainerCredentialsProvider doesn't 
> implement AWSCredentialsProvider (which it most certainly does). The error is 
> misleading because the shaded hadoop-aws code checks against the relocated 
> AWSCredentialsProvider interface, which the unshaded class does not implement.
> Adding this information (how to specify alternate credential providers, and 
> the namespace gotcha) to the [AWS deployment 
> docs|https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html]
> would be useful to anyone else using S3.
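> For illustration, a hedged side-by-side sketch of the setting in flink-conf.yaml (the shaded package prefix is the one quoted above and is release-specific):
> {code:java}
> # Fails: the unshaded class implements the unshaded AWSCredentialsProvider,
> # not the relocated interface that the shaded hadoop-aws code checks against.
> fs.s3a.aws.credentials.provider: com.amazonaws.auth.ContainerCredentialsProvider
>
> # Works: the relocated class implements the relocated interface.
> fs.s3a.aws.credentials.provider: org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider
> {code}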



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
