gh-yzou commented on code in PR #1862: URL: https://github.com/apache/polaris/pull/1862#discussion_r2193312774
########## plugins/spark/v3.5/spark/src/main/java/org/apache/polaris/spark/SparkCatalog.java:
##########
@@ -263,25 +279,39 @@ public String[][] listNamespaces(String[] namespace) throws NoSuchNamespaceExcep
   @Override
   public Map<String, String> loadNamespaceMetadata(String[] namespace)
       throws NoSuchNamespaceException {
-    return this.icebergsSparkCatalog.loadNamespaceMetadata(namespace);
+    Map<String, String> metadata = this.icebergsSparkCatalog.loadNamespaceMetadata(namespace);
+    if (PolarisCatalogUtils.isHudiExtensionEnabled()) {
+      HudiCatalogUtils.loadNamespaceMetadata(namespace, metadata);
+    }
+    return metadata;
   }

   @Override
   public void createNamespace(String[] namespace, Map<String, String> metadata)
       throws NamespaceAlreadyExistsException {
     this.icebergsSparkCatalog.createNamespace(namespace, metadata);
+    if (PolarisCatalogUtils.isHudiExtensionEnabled()) {
+      HudiCatalogUtils.createNamespace(namespace, metadata);
+    }
   }

   @Override
   public void alterNamespace(String[] namespace, NamespaceChange... changes)
       throws NoSuchNamespaceException {
     this.icebergsSparkCatalog.alterNamespace(namespace, changes);
+    if (PolarisCatalogUtils.isHudiExtensionEnabled()) {
+      HudiCatalogUtils.alterNamespace(namespace, changes);

Review Comment:
   For the CatalogTable creation, you might follow how Hudi creates the catalog table in HoodieCatalog: https://github.com/apache/hudi/blob/master/hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/catalog/HoodieCatalog.scala#L293
########## plugins/spark/v3.5/spark/src/main/java/org/apache/polaris/spark/SparkCatalog.java:
########## (same diff hunk as above)

Review Comment:
   As for createTable, I think the real problem comes from here: https://github.com/apache/hudi/blob/master/hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/command/CreateHoodieTableCommand.scala#L198
   Similar to Delta, when we use a REST catalog as the delegation target, I think we should call into the catalog plugin for table creation instead of the Spark session catalog.
   https://github.com/delta-io/delta/blob/2d89954008b6c53e49744f09435136c5c63b9f2c/spark/src/main/scala/org/apache/spark/sql/delta/catalog/DeltaCatalog.scala#L218
   Delta today triggers a special check for Unity Catalog here: https://github.com/delta-io/delta/blob/2d89954008b6c53e49744f09435136c5c63b9f2c/spark/src/main/scala/org/apache/spark/sql/delta/catalog/DeltaCatalog.scala#L77
   One option is to introduce a special flag on the Polaris SparkCatalog indicating that a third-party catalog plugin is in use, and then do something similar to DeltaCatalog.
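The "special flag" idea above can be sketched roughly as follows. This is a minimal illustration only: `SimpleTableCatalog`, `DelegatingSparkCatalog`, and `isExternalCatalogPlugin` are hypothetical stand-ins, not the actual Polaris or Spark APIs (the real code would work against `org.apache.spark.sql.connector.catalog.TableCatalog`).

```java
import java.util.Map;

// Hypothetical stand-in for Spark's TableCatalog surface, reduced to one method
// so the routing idea is easy to see.
interface SimpleTableCatalog {
  void createTable(String identifier, Map<String, String> properties);
}

// Sketch of the proposal: when a third-party catalog plugin (e.g. a REST/Polaris
// catalog) backs the session, route table creation to that plugin instead of
// falling through to the Spark session catalog -- analogous to DeltaCatalog's
// special-casing for Unity Catalog.
class DelegatingSparkCatalog implements SimpleTableCatalog {
  private final SimpleTableCatalog catalogPlugin;   // e.g. the Polaris REST catalog
  private final SimpleTableCatalog sessionCatalog;  // the default Spark session catalog
  private final boolean isExternalCatalogPlugin;    // the proposed "special flag"

  DelegatingSparkCatalog(
      SimpleTableCatalog catalogPlugin,
      SimpleTableCatalog sessionCatalog,
      boolean isExternalCatalogPlugin) {
    this.catalogPlugin = catalogPlugin;
    this.sessionCatalog = sessionCatalog;
    this.isExternalCatalogPlugin = isExternalCatalogPlugin;
  }

  @Override
  public void createTable(String identifier, Map<String, String> properties) {
    if (isExternalCatalogPlugin) {
      // Delegate to the configured catalog plugin, not the session catalog.
      catalogPlugin.createTable(identifier, properties);
    } else {
      sessionCatalog.createTable(identifier, properties);
    }
  }
}
```

In the real fix, the flag would presumably be derived from the Spark catalog configuration at initialization time, so table-creation paths such as Hudi's `CreateHoodieTableCommand` can be steered away from the session catalog without per-call configuration lookups.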