alexeykudinkin commented on code in PR #5737:
URL: https://github.com/apache/hudi/pull/5737#discussion_r888469016


##########
hudi-spark-datasource/hudi-spark3/src/main/scala/org/apache/spark/sql/hudi/analysis/HoodieSpark3Analysis.scala:
##########
@@ -45,16 +45,22 @@ case class HoodieSpark3Analysis(sparkSession: SparkSession) extends Rule[Logical
   with SparkAdapterSupport with ProvidesHoodieConfig {
 
   override def apply(plan: LogicalPlan): LogicalPlan = plan.resolveOperatorsDown {
-    case dsv2 @ DataSourceV2Relation(d: HoodieInternalV2Table, _, _, _, _) =>
-      val output = dsv2.output
-      val catalogTable = if (d.catalogTable.isDefined) {
-        Some(d.v1Table)
-      } else {
-        None
-      }
-      val relation = new DefaultSource().createRelation(new SQLContext(sparkSession),
-        buildHoodieConfig(d.hoodieCatalogTable))
-      LogicalRelation(relation, output, catalogTable, isStreaming = false)
+    // NOTE: This step is required since Hudi relations don't currently implement DS V2 Read API
+    case dsv2 @ DataSourceV2Relation(tbl: HoodieInternalV2Table, _, _, _, _) =>
+      val qualifiedTableName = QualifiedTableName(tbl.v1Table.database, tbl.v1Table.identifier.table)
+      val catalog = sparkSession.sessionState.catalog
+
+      catalog.getCachedPlan(qualifiedTableName, () => {

Review Comment:
   Yes, the issue is that it's never invalidated. V1 actually has a notion of the catalog; it's just that it's handled differently by V1 and V2, and since we're in between those two worlds we can't really make use of it.
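
   To make the invalidation concern concrete, below is a minimal, hypothetical sketch (not code from this PR) of the `SessionCatalog` plan-cache contract this resolution path relies on: `getCachedPlan` memoizes the plan built for a `QualifiedTableName` and keeps returning it until something explicitly calls `invalidateCachedTable`. The object and method names are made up for illustration; only the Spark `SessionCatalog` calls are real (assuming the Spark 3.x API).

   ```scala
   import org.apache.spark.sql.SparkSession
   import org.apache.spark.sql.catalyst.QualifiedTableName
   import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

   // Hypothetical helper, purely illustrative of the caching/invalidation contract.
   object CachedPlanSketch {

     // First call builds and caches the plan for `key`; later calls return the cached
     // copy, even if the underlying table has changed since the plan was built.
     def resolveWithCache(spark: SparkSession,
                          key: QualifiedTableName,
                          buildPlan: () => LogicalPlan): LogicalPlan =
       spark.sessionState.catalog.getCachedPlan(key, () => buildPlan())

     // Unless something calls this (e.g. after a write or a schema change), the entry
     // cached above is never refreshed -- the gap discussed in this thread.
     def invalidate(spark: SparkSession, key: QualifiedTableName): Unit =
       spark.sessionState.catalog.invalidateCachedTable(key)
   }
   ```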


