[ 
https://issues.apache.org/jira/browse/HADOOP-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17723487#comment-17723487
 ] 

Steve Loughran commented on HADOOP-18743:
-----------------------------------------

ok, this is bigger than it looks if we want to cut the v1 stuff out completely:

* the DT (delegation token) code needs to move to v2 credentials (doable)
* the fixup code in the credential provider list needs to patch v1 classname 
strings to their v2 equivalents before trying to load them (see the sketch 
after this list)
* supporting third-party credential providers is going to be tricky.
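
A rough sketch of what that fixup could look like; the class and method names 
here are assumptions for illustration, while the v1/v2 provider class names 
are the real SDK ones:
{code}
// Sketch only, not the actual Hadoop patch: remap well-known v1 provider
// class names in fs.s3a.aws.credentials.provider to their v2 equivalents
// before attempting to load them.
import java.util.Map;

public final class ProviderClassNameFixup {

  private static final Map<String, String> V1_TO_V2 = Map.of(
      "com.amazonaws.auth.EnvironmentVariableCredentialsProvider",
      "software.amazon.awssdk.auth.credentials.EnvironmentVariableCredentialsProvider",
      "com.amazonaws.auth.InstanceProfileCredentialsProvider",
      "software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider");

  /** Return the v2 class name when a known v1 name is configured, else the input unchanged. */
  static String remap(String configuredClassName) {
    String name = configuredClassName.trim();
    return V1_TO_V2.getOrDefault(name, name);
  }

  private ProviderClassNameFixup() {
  }
}
{code}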

proposed: use the v1 SDK as a *provided* dependency, with an enforcement rule 
restricting its use to a single V1SdkSupport class, which is only instantiated 
when a classname is supplied that isn't a v2 credential provider. That way you 
only need the v1 SDK on the classpath if your own code depends on it.
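
A sketch of how that lazy path might look; only the SDK interface names are 
real, while V1SdkSupport and the factory around it are assumptions about the 
proposed shape:
{code}
// Sketch of the proposed lazy v1 path: the v1 SDK is only touched when the
// configured class is not already a v2 AwsCredentialsProvider, so with the
// v1 jar in "provided" scope it is only needed by users whose providers use it.
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;

public final class CredentialProviderFactory {

  // hypothetical bridge class; the only place allowed to reference v1 SDK types
  private static final String V1_SUPPORT_CLASS =
      "org.apache.hadoop.fs.s3a.impl.V1SdkSupport";

  public static AwsCredentialsProvider create(String className) throws Exception {
    Class<?> cls = Class.forName(className);
    if (AwsCredentialsProvider.class.isAssignableFrom(cls)) {
      // pure v2 provider: no v1 classes on this code path
      return (AwsCredentialsProvider) cls.getDeclaredConstructor().newInstance();
    }
    // fall back to the v1 bridge, loaded reflectively so the v1 SDK jar is
    // only required when this branch is actually taken
    Class<?> bridge = Class.forName(V1_SUPPORT_CLASS);
    return (AwsCredentialsProvider) bridge
        .getMethod("wrap", Class.class)
        .invoke(null, cls);
  }

  private CredentialProviderFactory() {
  }
}
{code}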


> convert declarations of AWS v1 SDK EnvironmentVariableCredentialsProvider to 
> v2 version
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-18743
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18743
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Steve Loughran
>            Priority: Major
>
> While playing with the v2 SDK I've cut the v1 SDK from tools/lib, and now I'm 
> getting stack traces about the missing class 
> EnvironmentVariableCredentialsProvider:
> {code}
> java.io.IOException: From option fs.s3a.aws.credentials.provider java.lang.ClassNotFoundException: Class com.amazonaws.auth.EnvironmentVariableCredentialsProvider not found
>         at org.apache.hadoop.fs.s3a.auth.AwsCredentialListProvider.loadAWSProviderClasses(AwsCredentialListProvider.java:128)
>         at org.apache.hadoop.fs.s3a.auth.AwsCredentialListProvider.buildAWSProviderList(AwsCredentialListProvider.java:167)
>         at org.apache.hadoop.fs.s3a.auth.AwsCredentialListProvider.createAWSCredentialProviderSet(AwsCredentialListProvider.java:102)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:945)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:597)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3595)
>         at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:171)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3696)
>         at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3653)
>         at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:608)
>         at org.apache.hadoop.fs.store.diag.StoreDiag.executeFileSystemOperations(StoreDiag.java:753)
> {code}
> 
> this provider is listed in the fs.s3a.aws.credentials.provider chain:
> 
> {code}
> <property>
>   <name>fs.s3a.aws.credentials.provider</name>
>   <value>
>     org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider,
>     org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
>     com.amazonaws.auth.EnvironmentVariableCredentialsProvider,
>     org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider
>   </value>
> ...
> {code}
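> A chain that avoids the v1 class would list the v2 equivalent instead; 
> illustrative sketch only, assuming the loader accepts the v2 class name:
> {code}
> <property>
>   <name>fs.s3a.aws.credentials.provider</name>
>   <value>
>     org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider,
>     org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
>     software.amazon.awssdk.auth.credentials.EnvironmentVariableCredentialsProvider,
>     org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider
>   </value>
> </property>
> {code}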
> Ideally we should remove all v1 dependencies, both the explicit ones (as 
> here) and the implicit ones.
> Maybe for the env vars we should consider adding our own env-var provider, 
> which could go into branch-3.3 *now* as a single patch and can then be 
> cherry-picked by anyone who wants it in older releases; the v2 version will 
> work with the new API.
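> 
> A minimal sketch of the v2-facing variant, assuming the v2 SDK's 
> AwsCredentialsProvider interface; the class name and package are made up for 
> illustration:
> {code}
> // Sketch of a Hadoop-owned env-var provider against the v2 SDK; the class
> // name and package are assumptions, not a shipped implementation.
> package org.apache.hadoop.fs.s3a.auth;
> 
> import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
> import software.amazon.awssdk.auth.credentials.AwsCredentials;
> import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
> import software.amazon.awssdk.auth.credentials.AwsSessionCredentials;
> 
> public class EnvVarCredentialsProvider implements AwsCredentialsProvider {
> 
>   @Override
>   public AwsCredentials resolveCredentials() {
>     String key = System.getenv("AWS_ACCESS_KEY_ID");
>     String secret = System.getenv("AWS_SECRET_ACCESS_KEY");
>     String session = System.getenv("AWS_SESSION_TOKEN");
>     if (key == null || secret == null) {
>       throw new IllegalStateException(
>           "AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are not set");
>     }
>     return session != null
>         ? AwsSessionCredentials.create(key, secret, session)
>         : AwsBasicCredentials.create(key, secret);
>   }
> }
> {code}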



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
