Kontinuation opened a new pull request, #1817: URL: https://github.com/apache/datafusion-comet/pull/1817
## Which issue does this PR close?

Closes #1766.

## Rationale for this change

The `native_datafusion` Parquet scanner does not configure the object_store client using Hadoop S3A configurations. AWS credentials for accessing S3, such as `spark.hadoop.fs.s3a.access.key`, are ignored by `native_datafusion`, which leads to authentication failures when reading data from S3.

## What changes are included in this PR?

This patch translates commonly used Hadoop S3A configurations (mostly for setting up credentials) to their object_store counterparts. AWS allows accessing S3 with a variety of authentication methods, while object_store supports only some of them. We depend on `aws-config` and `aws-credential-types` to better support our complex use cases of AWS S3 credentials, including WebIdentityToken and AssumedRole credentials.

Hadoop S3A configuration translation is only added to `native_datafusion`. `native_iceberg_compat` may need to integrate with Iceberg catalog configuration, and its config translation could be handled differently, so we leave it as future work.

## How are these changes tested?

1. We define a structural stub, `CredentialProviderMetadata`, to make it easier to test the correctness of the AWS credential providers built by the native code. Please refer to the tests in `s3.rs` for details.
2. We added an end-to-end test using a MinIO testcontainer (`ParquetReadFromS3Suite`). This test runs in both `native_comet` and `native_datafusion` modes.
3. We manually tested this locally and in our cloud environment. It works for all the AWS credentials we use, including anonymous credentials, simple/temporary credentials, assumed-role credentials, EC2 instance profile credentials, and web identity token credentials.
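To make the translation idea concrete, here is a minimal, hypothetical sketch (not the PR's actual code in `s3.rs`) of mapping a few common Hadoop S3A keys onto object_store-style option names. The key names on the object_store side and the function name are illustrative assumptions; the real implementation also handles endpoints, region resolution, and credential-provider selection.

```rust
use std::collections::HashMap;

/// Illustrative sketch: translate a handful of common Hadoop S3A options
/// into object_store-style builder options. The mapping below is a
/// simplified assumption for demonstration; see s3.rs in the PR for the
/// actual set of supported keys.
fn translate_s3a_options(hadoop_conf: &HashMap<String, String>) -> HashMap<String, String> {
    // (Hadoop S3A key, hypothetical object_store option name)
    let mapping = [
        ("fs.s3a.access.key", "aws_access_key_id"),
        ("fs.s3a.secret.key", "aws_secret_access_key"),
        ("fs.s3a.session.token", "aws_session_token"),
        ("fs.s3a.endpoint", "aws_endpoint"),
        ("fs.s3a.endpoint.region", "aws_region"),
    ];
    let mut translated = HashMap::new();
    for (hadoop_key, object_store_key) in mapping {
        if let Some(value) = hadoop_conf.get(hadoop_key) {
            translated.insert(object_store_key.to_string(), value.clone());
        }
    }
    translated
}
```

Keys that have no counterpart (or are absent from the Hadoop configuration) are simply skipped, which is why credential methods that object_store cannot express natively, such as assumed roles, are instead routed through `aws-config`-built credential providers in the PR.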