[jira] [Commented] (FLINK-13602) S3 filesystems effectively do not support credential providers

2021-04-29 Thread Flink Jira Bot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17336447#comment-17336447
 ] 

Flink Jira Bot commented on FLINK-13602:


This issue was labeled "stale-major" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue, or revive the public 
discussion.


> S3 filesystems effectively do not support credential providers
> --
>
> Key: FLINK-13602
> URL: https://issues.apache.org/jira/browse/FLINK-13602
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: stale-major
>
> To provide credentials to S3, users may configure a credentials provider. For 
> providers from Amazon (which are relocated), we allow users to configure the 
> original class name and relocate it manually in the S3 filesystem factories.
> However, none of the Amazon-provided credential providers can be used with 
> the Presto filesystem, since it _additionally_ requires them to have a 
> constructor accepting a Hadoop configuration 
> (https://prestodb.github.io/docs/current/connector/hive.html#amazon-s3-configuration).
> {{hadoop-aws}} _does_ include a number of credential providers that have this 
> constructor; however, these use configuration keys that aren't mirrored from 
> the Flink config (they expect {{fs.s3a}} as a key prefix), not to mention 
> that users would have to configure the relocated class (since the S3 factory 
> only manually relocates Amazon classes).
> Finally, a custom implementation of the credentials provider can effectively 
> be ruled out, since it too would have to be implemented against the 
> relocated Amazon/Hadoop classes, which we can't really expect users to do.
> In summary, Amazon providers don't work since they lack the constructor that 
> Presto requires, Hadoop providers don't work since we don't mirror the 
> required configuration keys, and custom providers are unreasonable since 
> they'd have to be implemented against relocated classes.
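For concreteness, the constructor shape that Presto's Hive connector instantiates reflectively can be sketched as follows. This is a hypothetical, self-contained illustration: the `Configuration` class below is a local stand-in for `org.apache.hadoop.conf.Configuration`, and `ConfigBackedCredentialsProvider` is an invented name, not a real Flink or Presto class:

```java
import java.lang.reflect.Constructor;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for org.apache.hadoop.conf.Configuration,
// kept local so the sketch compiles on its own.
class Configuration {
    private final Map<String, String> props = new HashMap<>();
    public String get(String key) { return props.get(key); }
    public void set(String key, String value) { props.put(key, value); }
}

// A credentials provider exposing the single-argument Configuration
// constructor that Presto's Hive connector looks up reflectively.
// Amazon's stock providers lack such a constructor, which is the
// crux of this ticket.
class ConfigBackedCredentialsProvider {
    private final String accessKey;
    private final String secretKey;

    public ConfigBackedCredentialsProvider(Configuration conf) {
        // hadoop-aws-style providers read fs.s3a-prefixed keys,
        // which Flink's config mirroring does not cover.
        this.accessKey = conf.get("fs.s3a.access.key");
        this.secretKey = conf.get("fs.s3a.secret.key");
    }

    public String getAccessKey() { return accessKey; }
    public String getSecretKey() { return secretKey; }
}

public class Demo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.s3a.access.key", "AKIAEXAMPLE");
        conf.set("fs.s3a.secret.key", "example-secret");

        // Reflective lookup of the (Configuration) constructor,
        // mimicking what Presto does at runtime.
        Constructor<ConfigBackedCredentialsProvider> ctor =
            ConfigBackedCredentialsProvider.class
                .getConstructor(Configuration.class);
        ConfigBackedCredentialsProvider provider = ctor.newInstance(conf);
        System.out.println(provider.getAccessKey()); // prints AKIAEXAMPLE
    }
}
```

Because the lookup is reflective, a provider without exactly this constructor fails at instantiation time, regardless of how it is otherwise configured.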



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-13602) S3 filesystems effectively do not support credential providers

2021-04-22 Thread Flink Jira Bot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17328361#comment-17328361
 ] 

Flink Jira Bot commented on FLINK-13602:


This major issue is unassigned, and neither it nor any of its sub-tasks have 
been updated for 30 days, so it has been labeled "stale-major". If this ticket 
is indeed "major", please either assign yourself or post an update, and then 
remove the label. In 7 days the issue will be deprioritized.



[jira] [Commented] (FLINK-13602) S3 filesystems effectively do not support credential providers

2019-12-13 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995629#comment-16995629
 ] 

Steve Loughran commented on FLINK-13602:


Not much we can do in S3A to help here, sorry. 

Could you add a provider for Presto which takes a Hadoop config as a 
constructor argument but underneath instantiates its own set of credential 
providers? 
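The suggested delegation pattern could look roughly like the sketch below. Everything here is hypothetical: `HadoopConf`, `CredentialsProvider`, `AwsCredentials`, and `PrestoCompatibleCredentialsProvider` are illustrative stand-ins (the real types would be Hadoop's `Configuration` and the AWS SDK's credentials interfaces), and the `s3.access-key`/`s3.secret-key` keys are assumed Flink-style keys, shown only to convey the idea of satisfying Presto's constructor requirement while resolving credentials through an internal chain:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for org.apache.hadoop.conf.Configuration.
class HadoopConf {
    private final Map<String, String> props = new HashMap<>();
    public String get(String key) { return props.get(key); }
    public void set(String key, String value) { props.put(key, value); }
}

// Hypothetical stand-in for the AWS SDK's credentials type.
class AwsCredentials {
    final String accessKey;
    final String secretKey;
    AwsCredentials(String accessKey, String secretKey) {
        this.accessKey = accessKey;
        this.secretKey = secretKey;
    }
}

interface CredentialsProvider {
    // Returns credentials, or null if this provider cannot supply any.
    AwsCredentials resolve();
}

// The shape of the suggestion: a provider that satisfies Presto's
// (Configuration) constructor requirement, but underneath builds and
// consults its own chain of providers.
class PrestoCompatibleCredentialsProvider implements CredentialsProvider {
    private final List<CredentialsProvider> chain = new ArrayList<>();

    public PrestoCompatibleCredentialsProvider(HadoopConf conf) {
        // First try keys mirrored from the Flink config...
        chain.add(() -> {
            String ak = conf.get("s3.access-key");
            String sk = conf.get("s3.secret-key");
            return (ak != null && sk != null)
                ? new AwsCredentials(ak, sk) : null;
        });
        // ...then fall back to environment variables.
        chain.add(() -> {
            String ak = System.getenv("AWS_ACCESS_KEY_ID");
            String sk = System.getenv("AWS_SECRET_ACCESS_KEY");
            return (ak != null && sk != null)
                ? new AwsCredentials(ak, sk) : null;
        });
    }

    @Override
    public AwsCredentials resolve() {
        for (CredentialsProvider p : chain) {
            AwsCredentials c = p.resolve();
            if (c != null) {
                return c;
            }
        }
        throw new IllegalStateException("no credentials found");
    }
}

public class DelegationDemo {
    public static void main(String[] args) {
        HadoopConf conf = new HadoopConf();
        conf.set("s3.access-key", "AKIAEXAMPLE");
        conf.set("s3.secret-key", "example-secret");

        CredentialsProvider provider =
            new PrestoCompatibleCredentialsProvider(conf);
        System.out.println(provider.resolve().accessKey); // prints AKIAEXAMPLE
    }
}
```

Such a wrapper would keep Presto's reflective instantiation happy while sidestepping the relocation problem, since users would configure only the wrapper class rather than the relocated Amazon or Hadoop providers.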
