[
https://issues.apache.org/jira/browse/HUDI-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Y Ethan Guo updated HUDI-9634:
------------------------------
Description:
[https://github.com/apache/hudi/pull/13558#discussion_r2216883612]
There is a specific check on the Polaris catalog class. In the future, can this
be changed to a general check on any V2 catalog implementation, instead of a
specific catalog implementation class like Polaris, so that other V2 catalog
implementation classes can also be used?
When I mentioned a "general check on V2 catalog implementation", I was referring
to a check on a {{DelegatingCatalogExtension}} implementation that returns a v2
table, i.e., {{org.apache.spark.sql.connector.catalog.Table}} introduced in
Spark 3. Such a catalog implementation is specified through the Spark
config {{spark.sql.catalog.<spark-catalog-name>=...}}, which is how
{{org.apache.polaris.spark.SparkCatalog}} is plugged in.
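As a concrete illustration (the catalog name {{polaris}} here is an arbitrary example, not a required name), such a V2 catalog is wired in through Spark SQL configuration:

```properties
# Register a V2 catalog implementation under a chosen catalog name;
# any DelegatingCatalogExtension can be plugged in the same way.
spark.sql.catalog.polaris=org.apache.polaris.spark.SparkCatalog
```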
The complication in the Hudi Spark implementation is that Spark DataSource V1
is used for Hudi reads and writes, and {{HoodieCatalog}} has a mixed mode of
using v1 tables ({{org.apache.spark.sql.catalyst.catalog.CatalogTable}}) and
v2 tables. We should revisit the check to see if it can be made general.
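A minimal sketch of the proposed change, contrasting a class-name check against a type-based check. The interfaces below are simplified local stand-ins for Spark's real {{org.apache.spark.sql.connector.catalog}} classes, and the catalog class names are hypothetical, used only to illustrate the two check styles:

```java
// Stand-ins for Spark's CatalogPlugin / DelegatingCatalogExtension
// (simplified; the real classes carry many more methods).
interface CatalogPlugin {}
abstract class DelegatingCatalogExtension implements CatalogPlugin {}

// Hypothetical V2 catalogs plugged in via spark.sql.catalog.<name>=...
class PolarisLikeCatalog extends DelegatingCatalogExtension {}
class OtherV2Catalog extends DelegatingCatalogExtension {}

class CatalogCheck {
    // Current style (sketch): match one specific implementation class,
    // so any other V2 catalog fails the check.
    static boolean isPolarisCatalog(CatalogPlugin catalog) {
        return catalog.getClass().getName().contains("Polaris");
    }

    // Proposed general style (sketch): accept any DelegatingCatalogExtension,
    // i.e., any V2 catalog implementation, not just Polaris.
    static boolean isV2DelegatingCatalog(CatalogPlugin catalog) {
        return catalog instanceof DelegatingCatalogExtension;
    }

    public static void main(String[] args) {
        CatalogPlugin other = new OtherV2Catalog();
        // The class-name check misses non-Polaris V2 catalogs...
        System.out.println(CatalogCheck.isPolarisCatalog(other));      // false
        // ...while the general, type-based check accepts them.
        System.out.println(CatalogCheck.isV2DelegatingCatalog(other)); // true
    }
}
```

The open question in the issue is whether Hudi's mixed v1/v2 table handling allows the general, type-based form of this check to be used safely.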
> Archival considers retaining the `the earliest retain instant` in the clean
> plan
> --------------------------------------------------------------------------------
>
> Key: HUDI-9634
> URL: https://issues.apache.org/jira/browse/HUDI-9634
> Project: Apache Hudi
> Issue Type: Improvement
> Reporter: Chaoyang Liu
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.2.0
>
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)