MissaouiAhmed commented on issue #2552:
URL: https://github.com/apache/polaris/issues/2552#issuecomment-3285794103

   @dimas-b Thanks for the help!

   Below are the Docker logs:
   ```
   minio-polaris-setup-1  | 15:38:23.540506 [0-0] == Info: [WRITE] header_collect pushed(type=1, len=2) -> 0
   minio-polaris-setup-1  | 15:38:23.540682 [0-0] == Info: [WRITE] [OUT] wrote 2 header bytes -> 2
   minio-polaris-setup-1  | 15:38:23.540839 [0-0] == Info: [WRITE] [PAUSE] writing 2/2 bytes of type 4 -> 0
   minio-polaris-setup-1  | 15:38:23.541016 [0-0] == Info: [WRITE] download_write header(type=4, blen=2) -> 0
   minio-polaris-setup-1  | 15:38:23.541197 [0-0] == Info: [WRITE] client_write(type=4, len=2) -> 0
   minio-polaris-setup-1  | 15:38:23.541377 [0-0] <= Recv data, 152 bytes (0x98)
   minio-polaris-setup-1  | 0000: {"error":{"message":"Cannot create Catalog quickstart_catalog. C
   minio-polaris-setup-1  | 0040: atalog already exists or resolution failed","type":"AlreadyExist
   minio-polaris-setup-1  | 0080: sException","code":409}}
   minio-polaris-setup-1  | 15:38:23.541907 [0-0] == Info: [WRITE] [OUT] wrote 152 body bytes -> 152
   minio-polaris-setup-1  | 15:38:23.542070 [0-0] == Info: [WRITE] [PAUSE] writing 152/152 bytes of type 1 -> 0
   minio-polaris-setup-1  | 15:38:23.542253 [0-0] == Info: [WRITE] download_write body(type=1, blen=152) -> 0
   minio-polaris-setup-1  | 15:38:23.542434 [0-0] == Info: [WRITE] client_write(type=1, len=152) -> 0
   minio-polaris-setup-1  | 15:38:23.542601 [0-0] == Info: [WRITE] xfer_write_resp(len=230, eos=0) -> 0
   minio-polaris-setup-1  | 15:38:23.542805 [0-0] == Info: [WRITE] [OUT] done
   minio-polaris-setup-1  | 15:38:23.542937 [0-0] == Info: [READ] client_reset, clear readers
   minio-polaris-setup-1  | 15:38:23.543093 [0-0] == Info: Connection #0 to host polaris left intact
   minio-polaris-setup-1  | {"error":{"message":"Cannot create Catalog quickstart_catalog. Catalog already exists or resolution failed","type":"AlreadyExistsException","code":409}}Done.
   minio-polaris-setup-1  | Extra grants...
   minio-polaris-setup-1  |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
   minio-polaris-setup-1  |                                  Dload  Upload   Total   Spent    Left  Speed
   minio-polaris-1        | 2025-09-12 15:38:23,556 INFO  [io.qua.htt.access-log] [,POLARIS] [,,,] (vert.x-eventloop-thread-6) 172.21.0.5 - - [12/Sep/2025:15:38:23 +0000] "PUT /api/management/v1/catalogs/quickstart_catalog/catalog-roles/catalog_admin/grants HTTP/1.1" 401 -
   100    56    0     0  100    56      0   6825 --:--:-- --:--:-- --:--:--  8000

   minio-polaris-1        | 2025-09-12 15:41:23,925 INFO  [io.qua.htt.access-log] [,POLARIS] [,,,] (executor-thread-1) 10.163.169.81 - - [12/Sep/2025:15:41:23 +0000] "POST /api/catalog/v1/oauth/tokens HTTP/1.1" 200 753
   minio-polaris-1        | 2025-09-12 15:41:24,007 INFO  [io.qua.htt.access-log] [,POLARIS] [,,,] (executor-thread-1) 10.163.169.81 - root [12/Sep/2025:15:41:24 +0000] "GET /api/catalog/v1/config?warehouse=quickstart_catalog HTTP/1.1" 200 2122
   minio-polaris-1        | 2025-09-12 15:41:25,881 INFO  [org.apa.pol.ser.cat.ice.IcebergCatalogHandler] [,POLARIS] [,,,] (executor-thread-1) Initializing non-federated catalog
   minio-polaris-1        | 2025-09-12 15:41:25,888 INFO  [io.qua.htt.access-log] [,POLARIS] [,,,] (executor-thread-1) 10.163.169.81 - root [12/Sep/2025:15:41:25 +0000] "GET /api/catalog/v1/quickstart_catalog/namespaces/ns HTTP/1.1" 200 82
   minio-polaris-1        | 2025-09-12 15:41:26,228 INFO  [org.apa.pol.ser.exc.IcebergExceptionMapper] [,POLARIS] [,,,] (executor-thread-1) Handling runtimeException Table does not exist: ns.t1
   minio-polaris-1        | 2025-09-12 15:41:26,230 INFO  [io.qua.htt.access-log] [,POLARIS] [,,,] (executor-thread-1) 10.163.169.81 - root [12/Sep/2025:15:41:26 +0000] "GET /api/catalog/v1/quickstart_catalog/namespaces/ns/tables/t1?snapshots=all HTTP/1.1" 404 92
   minio-polaris-1        | 2025-09-12 15:41:26,404 INFO  [org.apa.pol.ser.exc.IcebergExceptionMapper] [,POLARIS] [,,,] (executor-thread-1) Handling runtimeException Principal 'root' with activated PrincipalRoles '[]' and activated grants via '[service_admin, catalog_admin]' is not authorized for op CREATE_TABLE_STAGED_WITH_WRITE_DELEGATION
   minio-polaris-1        | 2025-09-12 15:41:26,405 INFO  [io.qua.htt.access-log] [,POLARIS] [,,,] (executor-thread-1) 10.163.169.81 - root [12/Sep/2025:15:41:26 +0000] "POST /api/catalog/v1/quickstart_catalog/namespaces/ns/tables HTTP/1.1" 403 239
   minio-polaris-1        | 2025-09-12 15:41:26,498 INFO  [org.apa.pol.ser.exc.IcebergExceptionMapper] [,POLARIS] [,,,] (executor-thread-1) Handling runtimeException Table does not exist: ns.t1
   minio-polaris-1        | 2025-09-12 15:41:26,500 INFO  [io.qua.htt.access-log] [,POLARIS] [,,,] (executor-thread-1) 10.163.169.81 - root [12/Sep/2025:15:41:26 +0000] "GET /api/catalog/v1/quickstart_catalog/namespaces/ns/tables/t1?snapshots=all HTTP/1.1" 404 92
   minio-polaris-1        | 2025-09-12 15:41:26,523 INFO  [org.apa.pol.ser.exc.IcebergExceptionMapper] [,POLARIS] [,,,] (executor-thread-1) Handling runtimeException Table does not exist: ns.t1
   minio-polaris-1        | 2025-09-12 15:41:26,524 INFO  [io.qua.htt.access-log] [,POLARIS] [,,,] (executor-thread-1) 10.163.169.81 - root [12/Sep/2025:15:41:26 +0000] "GET /api/catalog/v1/quickstart_catalog/namespaces/ns/tables/t1?snapshots=all HTTP/1.1" 404 92
   minio-polaris-1        | 2025-09-12 15:41:28,889 INFO  [org.apa.pol.ser.exc.IcebergExceptionMapper] [,POLARIS] [,,,] (executor-thread-1) Handling runtimeException Table does not exist: ns.t1
   minio-polaris-1        | 2025-09-12 15:41:28,890 INFO  [io.qua.htt.access-log] [,POLARIS] [,,,] (executor-thread-1) 10.163.169.81 - root [12/Sep/2025:15:41:28 +0000] "GET /api/catalog/v1/quickstart_catalog/namespaces/ns/tables/t1?snapshots=all HTTP/1.1" 404 92
   minio-polaris-1        | 2025-09-12 15:41:29,002 INFO  [org.apa.pol.ser.exc.IcebergExceptionMapper] [,POLARIS] [,,,] (executor-thread-1) Handling runtimeException View does not exist: ns.t1
   minio-polaris-1        | 2025-09-12 15:41:29,003 INFO  [io.qua.htt.access-log] [,POLARIS] [,,,] (executor-thread-1) 10.163.169.81 - root [12/Sep/2025:15:41:29 +0000] "GET /api/catalog/v1/quickstart_catalog/namespaces/ns/views/t1 HTTP/1.1" 404 90
   
   ```
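
   For what it's worth, the setup container's "Extra grants..." step above got a 401 on the grants PUT. Below is a rough sketch of how I understand that call could be replayed by hand against the endpoints visible in these logs; the token request parameters and the grant payload are my assumptions from the getting-started scripts, so please correct me if they are wrong.

   ```
   # Assumption: root:s3cr3t are the bootstrap credentials (same as in the Spark config below).
   TOKEN=$(curl -s http://polaris:8181/api/catalog/v1/oauth/tokens \
     -d "grant_type=client_credentials&client_id=root&client_secret=s3cr3t&scope=PRINCIPAL_ROLE:ALL" \
     | sed -E 's/.*"access_token" *: *"([^"]+)".*/\1/')

   # Replay the grants call from the setup script with an explicit bearer token;
   # the 401 above suggests the Authorization header was missing or the token was rejected.
   curl -i -X PUT "http://polaris:8181/api/management/v1/catalogs/quickstart_catalog/catalog-roles/catalog_admin/grants" \
     -H "Authorization: Bearer ${TOKEN}" \
     -H "Content-Type: application/json" \
     -d '{"grant": {"type": "catalog", "privilege": "CATALOG_MANAGE_CONTENT"}}'
   ```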
   
   And this is the spark-sql side: the launch command, followed by the output.
   
   ./spark-sql --master "local[2]" --jars /jars/iceberg-aws-bundle-1.9.0.jar,/jars/iceberg-spark-runtime-3.5_2.12-1.9.0.jar \
    --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
    --conf spark.sql.catalog.polaris=org.apache.iceberg.spark.SparkCatalog \
    --conf spark.sql.catalog.polaris.type=rest \
    --conf spark.sql.catalog.polaris.uri=http://$POLARIS:8181/api/catalog \
    --conf spark.sql.catalog.polaris.token-refresh-enabled=false \
    --conf spark.sql.catalog.polaris.warehouse=quickstart_catalog \
    --conf spark.sql.catalog.polaris.scope=PRINCIPAL_ROLE:ALL \
    --conf spark.sql.catalog.polaris.header.X-Iceberg-Access-Delegation=vended-credentials \
    --conf spark.sql.catalog.polaris.credential=root:s3cr3t
   
   
   ```
   spark-sql ()> create table ns.t1 as select 'abc';
   25/09/12 08:41:26 ERROR SparkSQLDriver: Failed in [create table ns.t1 as select 'abc']
   org.apache.iceberg.exceptions.ForbiddenException: Forbidden: Principal 'root' with activated PrincipalRoles '[]' and activated grants via '[service_admin, catalog_admin]' is not authorized for op CREATE_TABLE_STAGED_WITH_WRITE_DELEGATION
           at org.apache.iceberg.rest.ErrorHandlers$DefaultErrorHandler.accept(ErrorHandlers.java:236)
           at org.apache.iceberg.rest.ErrorHandlers$TableErrorHandler.accept(ErrorHandlers.java:123)
           at org.apache.iceberg.rest.ErrorHandlers$TableErrorHandler.accept(ErrorHandlers.java:107)
           at org.apache.iceberg.rest.HTTPClient.throwFailure(HTTPClient.java:215)
           at org.apache.iceberg.rest.HTTPClient.execute(HTTPClient.java:299)
           at org.apache.iceberg.rest.BaseHTTPClient.post(BaseHTTPClient.java:88)
           at org.apache.iceberg.rest.RESTSessionCatalog$Builder.stageCreate(RESTSessionCatalog.java:921)
           at org.apache.iceberg.rest.RESTSessionCatalog$Builder.createTransaction(RESTSessionCatalog.java:799)
           at org.apache.iceberg.CachingCatalog$CachingTableBuilder.createTransaction(CachingCatalog.java:282)
           at org.apache.iceberg.spark.SparkCatalog.stageCreate(SparkCatalog.java:265)
           at org.apache.spark.sql.connector.catalog.StagingTableCatalog.stageCreate(StagingTableCatalog.java:93)
           at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:120)
           at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
           at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
           at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
           at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:107)
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:125)
           at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201)
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
           at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66)
           at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:107)
           at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
           at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:461)
           at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76)
           at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:461)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
           at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:437)
           at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:98)
           at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:85)
           at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:83)
           at org.apache.spark.sql.Dataset.<init>(Dataset.scala:220)
           at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
           at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
           at org.apache.spark.sql.SparkSession.$anonfun$sql$4(SparkSession.scala:691)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
           at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:682)
           at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:713)
           at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:744)
           at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:651)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:69)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:501)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:619)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:613)
           at scala.collection.Iterator.foreach(Iterator.scala:943)
           at scala.collection.Iterator.foreach$(Iterator.scala:943)
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
           at scala.collection.IterableLike.foreach(IterableLike.scala:74)
           at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
           at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:613)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:310)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
           at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
           at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.base/java.lang.reflect.Method.invoke(Method.java:566)
           at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
           at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1032)
           at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:194)
           at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:217)
           at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)
           at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1137)
           at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1146)
           at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   Forbidden: Principal 'root' with activated PrincipalRoles '[]' and activated grants via '[service_admin, catalog_admin]' is not authorized for op CREATE_TABLE_STAGED_WITH_WRITE_DELEGATION
   org.apache.iceberg.exceptions.ForbiddenException: Forbidden: Principal 'root' with activated PrincipalRoles '[]' and activated grants via '[service_admin, catalog_admin]' is not authorized for op CREATE_TABLE_STAGED_WITH_WRITE_DELEGATION
   
   ```
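
   My reading is that this ForbiddenException is a follow-on of the 401 from the "Extra grants..." step in the Docker logs, i.e. catalog_admin never received the grant it was supposed to get. As a check, I was thinking of listing the grants on that catalog role via a GET on the same path the setup script PUTs to (sketch only; I am assuming GET is supported on that path):

   ```
   # Assumption: ${TOKEN} obtained as in the sketch above, and GET is allowed on the grants path.
   curl -s -H "Authorization: Bearer ${TOKEN}" \
     "http://polaris:8181/api/management/v1/catalogs/quickstart_catalog/catalog-roles/catalog_admin/grants"
   # If CATALOG_MANAGE_CONTENT is missing from the response, the failed setup step would explain
   # why CREATE_TABLE_STAGED_WITH_WRITE_DELEGATION is denied for root.
   ```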

