gaocho commented on issue #3640:
URL: https://github.com/apache/polaris/issues/3640#issuecomment-3885603188

   > [@gaocho](https://github.com/gaocho) : the critical piece of data to 
ensure communication with the right service is the `endpoint`, not credential 
vending.
   > 
   > Please note that the `s3.endpoint` setting in your example is not relevant 
to Polaris. The right setting should be in the Catalog's Storage Config.
   > 
   > Please review RustFS, MinIO or Apache Ozone examples:
   > 
   > * 
https://polaris.apache.org/in-dev/unreleased/getting-started/creating-a-catalog/s3/
   > * https://github.com/apache/polaris/tree/main/getting-started
   
   Thank you for getting back to me @dimas-b.
   
   I confirmed that the catalog Storage Config includes the NetApp endpoint and `stsUnavailable=true`. From the Polaris CLI `catalogs list` output, I see:
   
   ```
   storageConfigInfo.storageType: "s3"
   storageConfigInfo.endpoint: https://<netapp-endpoint>:10443
   storageConfigInfo.allowedLocations: ["s3://iceberg/warehouse"]
   storageConfigInfo.pathStyleAccess: true
   storageConfigInfo.stsUnavailable: true
   ```
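   For reference, this is the shape of the request body I used when creating the catalog (a sketch against the management API; the catalog name `netapp_catalog` is illustrative, and the field names are inferred from the `catalogs list` output above, so please correct me if the schema differs):
   
   ```json
   {
     "catalog": {
       "name": "netapp_catalog",
       "type": "INTERNAL",
       "properties": {
         "default-base-location": "s3://iceberg/warehouse"
       },
       "storageConfigInfo": {
         "storageType": "S3",
         "endpoint": "https://<netapp-endpoint>:10443",
         "pathStyleAccess": true,
         "stsUnavailable": true,
         "allowedLocations": ["s3://iceberg/warehouse"]
       }
     }
   }
   ```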
   
   Despite that, Spark `CREATE TABLE` still fails with:
   
   ```
   ForbiddenException: The AWS Access Key Id you provided does not exist in our records. (Service: S3, Status Code: 403)
   ```
   
   This suggests Polaris/Iceberg is still attempting credential vending (sub-scoped credentials) during table creation even though NetApp StorageGRID does not support STS, so requests end up signed with a key that StorageGRID does not recognize.
   
   Can you confirm the expected behavior in Polaris 1.3 for S3-compatible storage with `stsUnavailable=true`: should Polaris pass the client's (Spark's) static keys through without vending? If so, what configuration or flags are required to ensure Polaris does not generate or vend credentials during `CREATE TABLE`?
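   In case it helps, this is roughly how my Spark session is configured (a sketch only; the catalog name `polaris` and the `<...>` placeholders are illustrative, and deliberately NOT setting an `X-Iceberg-Access-Delegation` header reflects my understanding that this should stop the client from requesting vended credentials, which is exactly the behavior I'd like confirmed):
   
   ```properties
   # Iceberg REST catalog pointed at Polaris
   spark.sql.catalog.polaris=org.apache.iceberg.spark.SparkCatalog
   spark.sql.catalog.polaris.type=rest
   spark.sql.catalog.polaris.uri=https://<polaris-host>/api/catalog
   spark.sql.catalog.polaris.warehouse=<catalog-name>
   # Static StorageGRID keys; no access-delegation header is set,
   # on the assumption that this avoids requesting vended credentials
   spark.sql.catalog.polaris.s3.endpoint=https://<netapp-endpoint>:10443
   spark.sql.catalog.polaris.s3.path-style-access=true
   spark.sql.catalog.polaris.s3.access-key-id=<storagegrid-access-key>
   spark.sql.catalog.polaris.s3.secret-access-key=<storagegrid-secret-key>
   ```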
   
   I’m attaching:
   
   * the `catalogs list` output showing the `storageConfigInfo` endpoint and `stsUnavailable`
   * the Spark stack trace with the 403 “Access Key does not exist” error
   
   Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
