youngxinler commented on code in PR #6886:
URL: https://github.com/apache/iceberg/pull/6886#discussion_r1145687170
##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/SparkCatalog.java:
##########
@@ -132,6 +132,9 @@ protected Catalog buildIcebergCatalog(String name, CaseInsensitiveStringMap options) {
optionsMap.putAll(options.asCaseSensitiveMap());
optionsMap.put(CatalogProperties.APP_ID, SparkSession.active().sparkContext().applicationId());
optionsMap.put(CatalogProperties.USER, SparkSession.active().sparkContext().sparkUser());
+    optionsMap.putIfAbsent(
+        CatalogProperties.WAREHOUSE_LOCATION,
+        SparkSession.active().sqlContext().conf().warehousePath());
Review Comment:
Of course, I agree that when a HadoopCatalog is used on its own, an error should be thrown if no warehouse is set.
But I think `spark.sql.warehouse.dir` should have the same behavior for all catalogs in Spark, so this behavior seems fine to me.
If you are still concerned about the possibility of breaking the previous behavior, I can change the `spark.sql.warehouse.dir` default so that it only takes effect for HiveCatalog. I would like to hear your opinion.
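To make the alternative concrete, here is a minimal, self-contained sketch of the fallback logic being discussed, using plain `Map` operations instead of the real Spark/Iceberg classes. The property keys, the `"type"` option, and the `hiveOnly` flag are illustrative assumptions, not the actual `SparkCatalog` API; the point is that `putIfAbsent` only supplies `spark.sql.warehouse.dir` when no warehouse was configured, and that the stricter variant additionally gates the fallback on the catalog type.

```java
import java.util.HashMap;
import java.util.Map;

public class WarehouseFallbackSketch {
  // Hypothetical keys standing in for CatalogProperties.WAREHOUSE_LOCATION
  // and the Spark catalog "type" option; names are illustrative only.
  static final String WAREHOUSE_LOCATION = "warehouse";
  static final String CATALOG_TYPE = "type";

  // Applies the spark.sql.warehouse.dir fallback only when no warehouse is
  // already set and, under the stricter alternative, only for Hive catalogs.
  static Map<String, String> applyFallback(
      Map<String, String> options, String sparkWarehouseDir, boolean hiveOnly) {
    Map<String, String> result = new HashMap<>(options);
    if (!hiveOnly || "hive".equalsIgnoreCase(result.get(CATALOG_TYPE))) {
      // putIfAbsent keeps any explicitly configured warehouse untouched.
      result.putIfAbsent(WAREHOUSE_LOCATION, sparkWarehouseDir);
    }
    return result;
  }

  public static void main(String[] args) {
    Map<String, String> hadoop = new HashMap<>();
    hadoop.put(CATALOG_TYPE, "hadoop");

    // With hiveOnly=true a HadoopCatalog gets no default warehouse, so it
    // would still fail fast later, preserving the previous behavior.
    System.out.println(
        applyFallback(hadoop, "/tmp/wh", true).containsKey(WAREHOUSE_LOCATION)); // false

    // With hiveOnly=false the fallback applies to every catalog type.
    System.out.println(
        applyFallback(hadoop, "/tmp/wh", false).get(WAREHOUSE_LOCATION)); // /tmp/wh
  }
}
```

Either behavior keeps an explicitly configured `warehouse` option authoritative; the disagreement is only about which catalog types should inherit the Spark session default when the option is absent.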
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]