terry-xu-2021 opened a new issue, #6303:
URL: https://github.com/apache/seatunnel/issues/6303

   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22)
 and found no similar issues.
   
   
   ### What happened
   
   I have put aws-java-sdk-bundle-1.11.271.jar and hadoop-aws-3.1.4.jar into 
seatunnel-web/libs. After I created an S3 datasource, an error occurred when I 
clicked the Test Connect button in the UI, and a pop-up appeared with the message `datasource [{0}] invalid`. 
   
   ### SeaTunnel Version
   
   seatunnel-engine: 2.3.3
   seatunnel-web: 1.0.0
   
   ### SeaTunnel Config
   
   ```conf
   # Datasource parameters entered in the web UI:
   path = "/test.txt"
   fs.s3a.endpoint = "http://x.x.x.x:9000"
   fs.s3a.aws.credentials.provider = "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
   access_key = "****"
   secret_key = "****"
   bucket = "s3a://test"
   ```
   
   With the same configuration on the seatunnel cluster side, the seatunnel.sh command runs successfully.
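
   For comparison, the same parameters map onto a SeaTunnel job config roughly as follows. This is my reconstruction of the working cluster-side setup, not copied from it; the `S3File` connector name and `file_format_type` value are assumptions:
   
   ```conf
   source {
     S3File {
       path = "/test.txt"
       bucket = "s3a://test"
       fs.s3a.endpoint = "http://x.x.x.x:9000"
       fs.s3a.aws.credentials.provider = "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
       access_key = "****"
       secret_key = "****"
       file_format_type = "text"
     }
   }
   ```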
   
   
   ### Running Command
   
   ```shell
   # No shell command; connectivity test was triggered from the web UI
   # via the "Test Connect" button.
   ```
   
   
   ### Error Exception
   
   ```log
   Logs as follows:
   
   Caused by: java.io.IOException: 
org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider instantiation exception: 
java.lang.NoSuchMethodError: 
com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V
           at 
org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:678)
           at 
org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:566)
           at 
org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
           at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:256)
           at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
           at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
           at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
           at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
           at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
           at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:227)
           at 
org.apache.seatunnel.datasource.plugin.s3.S3DatasourceChannel.checkDataSourceConnectivity(S3DatasourceChannel.java:65)
           ... 65 common frames omitted
   Caused by: java.lang.NoSuchMethodError: 
com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V
           at 
org.apache.hadoop.fs.s3a.S3AUtils.lookupPassword(S3AUtils.java:742)
           at 
org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider.<init>(SimpleAWSCredentialsProvider.java:59)
           at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method)
           at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
           at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
           at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
           at 
org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:635)
           ... 75 common frames omitted
   ```
   
   I think there is a dependency conflict among the jars in seatunnel-web/libs (the Guava `Preconditions` method signature at runtime does not match what hadoop-aws expects). How can I resolve it?
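
   Since the `NoSuchMethodError` points at Guava's `Preconditions`, one way to narrow this down is to scan the jars in `libs` for bundled copies of that class. The helper below is only a sketch (the `seatunnel-web/libs` path comes from this report; everything else is an assumption):
   
   ```shell
   #!/bin/sh
   # Sketch: list jars in a directory that bundle Guava's Preconditions class,
   # to spot duplicate/conflicting Guava copies on the classpath.
   find_guava_jars() {
     dir="$1"
     for jar in "$dir"/*.jar; do
       [ -f "$jar" ] || continue   # skip when the glob matches nothing
       # unzip -l prints the archive listing; grep for the Guava class entry
       if unzip -l "$jar" 2>/dev/null | grep -q 'com/google/common/base/Preconditions.class'; then
         echo "$jar"
       fi
     done
   }
   
   find_guava_jars seatunnel-web/libs
   ```
   
   If more than one jar is printed, comparing the Guava versions they bundle should reveal which one is shadowing the version hadoop-aws was compiled against.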
   
   
   ### Zeta or Flink or Spark Version
   
   _No response_
   
   ### Java or Scala Version
   
   _No response_
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

