ang6300 commented on issue #32:
URL: https://github.com/apache/polaris/issues/32#issuecomment-2537543524

   Hello @lefebsy 
   
   Thank you for all the details. 
   If I read it correctly, run_spark_sql_s3compatible.sh does not use a role ARN.
   
   Is it possible to use a role ARN but with a custom STS endpoint?
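   For context, this is the kind of invocation I have in mind (a sketch; the endpoint URLs below are placeholders for the on-premise service, and the catalog property names come from Iceberg's S3FileIO, not from Polaris itself):

   ```shell
   # Hypothetical run: pass the S3 location plus a role ARN, as the
   # script's usage text describes.
   ./run_spark_sql.sh s3://polaris-sg/sg_db1 arn:aws:iam::123456789001:role/my-role

   # On the Spark client side, Iceberg's S3FileIO already accepts a
   # custom S3 endpoint and path-style access, e.g.:
   #   --conf spark.sql.catalog.polaris.s3.endpoint=https://s3.example.internal
   #   --conf spark.sql.catalog.polaris.s3.path-style-access=true
   #
   # The open question is whether the catalog's AssumeRole call can be
   # pointed at a custom STS endpoint in the same way.
   ```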
   
   [trino-user@trino-m regtests]$ grep role run_spark_sql.sh
   #   ./run_spark_sql.sh [S3-location AWS-IAM-role]
   #       - [AWS-IAM-role] - The AWS IAM role for catalog to assume when accessing the S3 location.
   #     ./run_spark_sql.sh s3://my-bucket/path arn:aws:iam::123456789001:role/my-role
     echo "Usage: ./run_spark_sql.sh [S3-location AWS-IAM-role]"
    
    Currently I am able to run run_spark_sql.sh with an S3 location and an AWS IAM role against on-premise S3-compatible storage.
    It creates the metadata object but not the Parquet file.
    
    aws s3 ls --recursive s3://polaris-sg/sg_db1
   2024-12-11 23:34:03       1072 sg_db1/table1/metadata/00000-e77a2d71-ee13-4bd7-89a8-2450650d5249.metadata.json
   
   It failed to write the Parquet file:
   
   s3://polaris-sg/sg_db1/table1/data/00000-0-49bf02f5-8632-45b1-be33-95e1658e3a33-0-00001.parquet
   
   The polaris-sg bucket is on S3-compatible storage, not on AWS S3.
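   To separate connectivity and credential problems from the Polaris-side behaviour, the same endpoint can be probed directly with the AWS CLI (a sketch; the endpoint URLs are placeholders for the on-premise service):

   ```shell
   # --endpoint-url is the AWS CLI's global override, so the command
   # talks to the S3-compatible server instead of AWS S3.
   aws s3 ls --recursive s3://polaris-sg/sg_db1 \
     --endpoint-url https://s3.example.internal

   # The STS API can be redirected the same way; if this succeeds, the
   # role and the custom STS endpoint work at the protocol level, and
   # the question becomes whether the catalog can be told to do the same.
   aws sts assume-role \
     --role-arn arn:aws:iam::123456789001:role/my-role \
     --role-session-name polaris-test \
     --endpoint-url https://sts.example.internal
   ```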
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
