yihua commented on code in PR #13558:
URL: https://github.com/apache/hudi/pull/13558#discussion_r2228485705


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/HoodieSqlCommonUtils.scala:
##########
@@ -378,4 +378,17 @@ object HoodieSqlCommonUtils extends SparkAdapterSupport {
       throw new HoodieException(s"Got an invalid instant ($queryInstant)")
     }
   }
+
+  /**
+   * Check if Polaris catalog is enabled in the Spark session.
+   * @param sparkSession The Spark session
+   * @return true if Polaris catalog is configured, false otherwise
+   */
+  def isUsingPolarisCatalog(sparkSession: SparkSession): Boolean = {

Review Comment:
   Summarizing what we discussed offline: when I mentioned a "general check on 
V2 catalog implementation", I was referring to a check for a 
`DelegatingCatalogExtension` implementation that returns v2 tables, i.e., 
`org.apache.spark.sql.connector.catalog.Table` introduced in Spark 3. Such a 
catalog implementation is specified through the Spark config 
`spark.sql.catalog.<spark-catalog-name>=...`, which is how 
`org.apache.polaris.spark.SparkCatalog` is plugged in (see the sketch below). 
The complication in the Hudi Spark implementation is that Spark Datasource v1 
is used for Hudi reads and writes, and `HoodieCatalog` has a mixed mode of 
using the v1 table (`org.apache.spark.sql.catalyst.catalog.CatalogTable`) and 
the v2 table. So it's OK to defer a better check to a subsequent PR.
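
   For reference, a minimal sketch of what such a general check could look 
like, assuming the catalog class is read from `spark.sql.catalog.<catalog-name>` 
and tested against `DelegatingCatalogExtension`; the helper name 
`isDelegatingV2Catalog` and the wrapping object are hypothetical and not part 
of this PR:

   ```scala
   import org.apache.spark.sql.SparkSession
   import org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension

   object CatalogCheckSketch {
     /**
      * Returns true if the catalog registered under `spark.sql.catalog.<catalogName>`
      * is a DelegatingCatalogExtension (e.g., org.apache.polaris.spark.SparkCatalog),
      * i.e., a V2 catalog implementation layered over the session catalog.
      */
     def isDelegatingV2Catalog(spark: SparkSession,
                               catalogName: String = "spark_catalog"): Boolean = {
       spark.conf.getOption(s"spark.sql.catalog.$catalogName").exists { className =>
         try {
           // Load the configured catalog class without initializing it.
           val clazz = Class.forName(className, false,
             Thread.currentThread().getContextClassLoader)
           classOf[DelegatingCatalogExtension].isAssignableFrom(clazz)
         } catch {
           case _: ClassNotFoundException => false
         }
       }
     }
   }
   ```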
