Dyana Rose created FLINK-8439:
---------------------------------
Summary: Document using a custom AWS Credentials Provider with
flink-s3-fs-hadoop
Key: FLINK-8439
URL: https://issues.apache.org/jira/browse/FLINK-8439
Project: Flink
Issue Type: Improvement
Components: Documentation
Reporter: Dyana Rose
This came up when using S3 as the file system backend while running under
ECS.
With no credentials in the container, hadoop-aws defaults to EC2 instance-level
credentials when accessing S3. However, when running under ECS, you will
generally want to use the task definition's IAM role instead.
In this case you need to set the Hadoop property
{code:java}
fs.s3a.aws.credentials.provider{code}
to one or more fully qualified class names; see the [hadoop-aws
docs|https://github.com/apache/hadoop/blob/1ba491ff907fc5d2618add980734a3534e2be098/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md].
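As a concrete illustration of the chained form described in the hadoop-aws docs (class names shown unshaded here; the shading caveat discussed below still applies when running under flink-s3-fs-hadoop), a configuration listing several providers to try in order might look like:
{code:java}
fs.s3a.aws.credentials.provider: com.amazonaws.auth.ContainerCredentialsProvider,com.amazonaws.auth.InstanceProfileCredentialsProvider{code}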
This works as expected when you add the setting to flink-conf.yaml, but there
is a further 'gotcha.' Because the AWS SDK is shaded, the actual fully
qualified class name for, in this case, ContainerCredentialsProvider is
{code:java}
org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider{code}
meaning the full setting is:
{code:java}
fs.s3a.aws.credentials.provider:
org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider{code}
If you instead set it to the unshaded class name, you will see a very confusing
error stating that ContainerCredentialsProvider doesn't implement
AWSCredentialsProvider (which it most certainly does).
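To make that error less mysterious: the check happens against the shaded copy of the interface. A small sketch (class name hypothetical, assuming both the plain AWS SDK and the flink-s3-fs-hadoop jar are on the classpath) showing why the unshaded provider fails the assignability check:
{code:java}
public class ShadingCheck {
    public static void main(String[] args) throws Exception {
        // hadoop-aws inside flink-s3-fs-hadoop validates the configured class
        // against the *shaded* copy of AWSCredentialsProvider ...
        Class<?> shadedIface = Class.forName(
            "org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.AWSCredentialsProvider");
        // ... so the unshaded provider, which implements only the unshaded
        // interface, is not assignable to it despite the matching simple name.
        Class<?> unshadedProvider = Class.forName(
            "com.amazonaws.auth.ContainerCredentialsProvider");
        System.out.println(shadedIface.isAssignableFrom(unshadedProvider)); // false
    }
}{code}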
Adding this information (how to specify alternate credential providers, and the
namespace gotcha) to the [AWS deployment
docs|https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html]
would be useful to anyone else using S3.
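Since the summary also mentions fully custom providers, the docs could include a minimal sketch along these lines (class name hypothetical; per the hadoop-aws docs linked above the class must implement AWSCredentialsProvider and be instantiable reflectively, and under flink-s3-fs-hadoop the shaded com.amazonaws package applies):
{code:java}
package com.example;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;

// Hypothetical provider; with flink-s3-fs-hadoop the imports would use the
// org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth package instead,
// since hadoop-aws checks candidates against the shaded interface.
public class MyCredentialsProvider implements AWSCredentialsProvider {

    @Override
    public AWSCredentials getCredentials() {
        // Fetch credentials from wherever they live, e.g. an internal service.
        return new BasicAWSCredentials("accessKey", "secretKey");
    }

    @Override
    public void refresh() {
        // Re-fetch the credentials here if they can expire.
    }
}{code}
The user's own class is not relocated by the shading, so fs.s3a.aws.credentials.provider would be set to com.example.MyCredentialsProvider as-is; only the SDK types it references carry the shaded prefix.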