leesf commented on code in PR #5737:
URL: https://github.com/apache/hudi/pull/5737#discussion_r890746379
##########
hudi-spark-datasource/hudi-spark3/src/main/scala/org/apache/spark/sql/hudi/catalog/HoodieCatalog.scala:
##########
@@ -105,12 +106,30 @@ class HoodieCatalog extends DelegatingCatalogExtension
case _ =>
catalogTable0
}
- HoodieInternalV2Table(
+
+ val v2Table = HoodieInternalV2Table(
spark = spark,
path = catalogTable.location.toString,
catalogTable = Some(catalogTable),
tableIdentifier = Some(ident.toString))
- case o => o
+
+    val schemaEvolutionEnabled: Boolean =
+      spark.sessionState.conf.getConfString(DataSourceReadOptions.SCHEMA_EVOLUTION_ENABLED.key,
+        DataSourceReadOptions.SCHEMA_EVOLUTION_ENABLED.defaultValue.toString).toBoolean
+
+    // NOTE: PLEASE READ CAREFULLY
+    //
+    // Since Hudi relations don't currently implement the DS V2 Read API, we fall back to V1 here by default.
+    // Such a fallback has a considerable performance impact, therefore it's only performed in cases
+    // where the V2 API has to be used. Currently the only such use-case is the Schema Evolution feature.
+    //
+    // Check out HUDI-4178 for more details
+ if (schemaEvolutionEnabled) {
+ v2Table
+ } else {
+ v2Table.v1TableWrapper
Review Comment:
There is still a problem here: `v2Table.v1TableWrapper` returns a `V1Table`, which
does not support reading/writing. What do you mean by `Catalog still exposes the
methods to write into Hudi tables`?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]