MonkeyCanCode commented on code in PR #3591:
URL: https://github.com/apache/polaris/pull/3591#discussion_r2738524990


##########
getting-started/ozone/README.md:
##########
@@ -48,6 +48,7 @@ bin/spark-sql \
     --conf spark.sql.catalog.polaris.token-refresh-enabled=false \
     --conf spark.sql.catalog.polaris.warehouse=quickstart_catalog \
     --conf spark.sql.catalog.polaris.scope=PRINCIPAL_ROLE:ALL \
+    --conf spark.sql.catalog.polaris.header.X-Iceberg-Access-Delegation="" \

Review Comment:
   So it is working for me (which is how I validated my PR last night):
   ```
   ➜  spark-3.5.7-bin-hadoop3 bin/spark-sql \
       --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.9.0,org.apache.iceberg:iceberg-aws-bundle:1.9.0 \
       --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
       --conf spark.sql.catalog.polaris=org.apache.iceberg.spark.SparkCatalog \
       --conf spark.sql.catalog.polaris.type=rest \
       --conf spark.sql.catalog.polaris.uri=http://localhost:8181/api/catalog \
       --conf spark.sql.catalog.polaris.token-refresh-enabled=false \
       --conf spark.sql.catalog.polaris.warehouse=quickstart_catalog \
       --conf spark.sql.catalog.polaris.scope=PRINCIPAL_ROLE:ALL \
       --conf spark.sql.catalog.polaris.credential=root:s3cr3t \
       --conf spark.sql.catalog.polaris.client.region=us-west-2 \
       --conf spark.sql.catalog.polaris.s3.access-key-id=polaris_root \
       --conf spark.sql.catalog.polaris.s3.secret-access-key=polaris_pass
   :: loading settings :: url = jar:file:/Users/yong/Downloads/spark-3.5.7-bin-hadoop3/jars/ivy-2.5.1.jar!/org/apache/ivy/core/settings/ivysettings.xml
   Ivy Default Cache set to: /Users/yong/.ivy2/cache
   The jars for the packages stored in: /Users/yong/.ivy2/jars
   org.apache.iceberg#iceberg-spark-runtime-3.5_2.12 added as a dependency
   org.apache.iceberg#iceberg-aws-bundle added as a dependency
   :: resolving dependencies :: org.apache.spark#spark-submit-parent-89dd8f45-7ca0-4f4b-b3e5-aff97c570bb1;1.0
        confs: [default]
        found org.apache.iceberg#iceberg-spark-runtime-3.5_2.12;1.9.0 in central
        found org.apache.iceberg#iceberg-aws-bundle;1.9.0 in central
   :: resolution report :: resolve 76ms :: artifacts dl 3ms
        :: modules in use:
        org.apache.iceberg#iceberg-aws-bundle;1.9.0 from central in [default]
        org.apache.iceberg#iceberg-spark-runtime-3.5_2.12;1.9.0 from central in [default]
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   2   |   0   |   0   |   0   ||   2   |   0   |
        ---------------------------------------------------------------------
   :: retrieving :: org.apache.spark#spark-submit-parent-89dd8f45-7ca0-4f4b-b3e5-aff97c570bb1
        confs: [default]
        0 artifacts copied, 2 already retrieved (0kB/3ms)
   26/01/28 14:12:09 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   Setting default log level to "WARN".
   To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
   26/01/28 14:12:11 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
   26/01/28 14:12:11 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
   26/01/28 14:12:11 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
   26/01/28 14:12:11 WARN ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore [email protected]
   Spark Web UI available at http://192.168.1.177:4040
   Spark master: local[*], Application Id: local-1769631130381
   spark-sql (default)> use polaris;
   26/01/28 14:12:13 WARN AuthManagers: Inferring rest.auth.type=oauth2 since property credential was provided. Please explicitly set rest.auth.type to avoid this warning.
   26/01/28 14:12:13 WARN OAuth2Manager: Iceberg REST client is missing the OAuth2 server URI configuration and defaults to http://localhost:8181/api/catalog/v1/oauth/tokens. This automatic fallback will be removed in a future Iceberg release. It is recommended to configure the OAuth2 endpoint using the 'oauth2-server-uri' property to be prepared. This warning will disappear if the OAuth2 endpoint is explicitly configured. See https://github.com/apache/iceberg/issues/10537
   26/01/28 14:12:14 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
   Time taken: 0.783 seconds
   spark-sql ()> create namespace ns;
   Time taken: 0.442 seconds
   spark-sql ()> create table ns.t1 as select 'abc';
   26/01/28 14:12:24 WARN GarbageCollectionMetrics: To enable non-built-in garbage collector(s) List(G1 Concurrent GC), users should configure it(them) to spark.eventLog.gcMetrics.youngGenerationGarbageCollectors or spark.eventLog.gcMetrics.oldGenerationGarbageCollectors
   Time taken: 4.285 seconds
   spark-sql ()> select * from ns.t1;
   abc
   Time taken: 0.579 seconds, Fetched 1 row(s)
   spark-sql ()>
   ```
   
   Is it possible the Polaris image differs in your local setup, since we are just pointing at `latest`? Based on my understanding, this is why we have `stsUnavailable` set to `true` in this Docker Compose file. I also tried building the latest local 1.4.0-incubating, and that still works as well. I suspect some local changes in your latest Polaris image. Would you mind double-checking by either removing the local image or rebuilding from the main branch?
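   For reference, this is roughly how I would rule out a stale local image (the image name and compose file path are assumptions based on the quickstart setup; adjust to whatever `docker images` shows on your machine):
   ```shell
   # Stop the quickstart stack and drop the locally cached Polaris image,
   # then pull (or rebuild from main) so the next run starts from a fresh image.
   docker compose -f getting-started/ozone/docker-compose.yml down
   docker rmi apache/polaris:latest   # image name is an assumption; verify with `docker images`
   docker compose -f getting-started/ozone/docker-compose.yml pull
   docker compose -f getting-started/ozone/docker-compose.yml up -d
   ```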



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
