alexeykudinkin commented on code in PR #5737:
URL: https://github.com/apache/hudi/pull/5737#discussion_r890475622
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieBaseRelation.scala:
##########
@@ -122,29 +122,39 @@ abstract class HoodieBaseRelation(val sqlContext: SQLContext,
optParams.get(DataSourceReadOptions.TIME_TRAVEL_AS_OF_INSTANT.key)
.map(HoodieSqlCommonUtils.formatQueryInstant)
+ /**
+ * NOTE: Initialization of the following members is coupled on purpose to minimize the amount of I/O
+ * required to fetch the table's Avro and Internal schemas
+ */
protected lazy val (tableAvroSchema: Schema, internalSchema: InternalSchema) = {
- val schemaUtil = new TableSchemaResolver(metaClient)
- val avroSchema = Try(schemaUtil.getTableAvroSchema) match {
- case Success(schema) => schema
- case Failure(e) =>
- logWarning("Failed to fetch schema from the table", e)
- // If there is no commit in the table, we can't get the schema
- // t/h [[TableSchemaResolver]], fallback to the provided [[userSchema]] instead.
- userSchema match {
- case Some(s) => convertToAvroSchema(s)
- case _ => throw new IllegalArgumentException("User-provided schema is required in case the table is empty")
- }
+ val schemaResolver = new TableSchemaResolver(metaClient)
+ val avroSchema: Schema = schemaSpec.map(convertToAvroSchema).getOrElse {
+ Try(schemaResolver.getTableAvroSchema) match {
+ case Success(schema) => schema
+ case Failure(e) =>
+ logError("Failed to fetch schema from the table", e)
+ throw new HoodieSchemaException("Failed to fetch schema from the table")
+ }
}
- // try to find internalSchema
- val internalSchemaFromMeta = try {
- schemaUtil.getTableInternalSchemaFromCommitMetadata.orElse(InternalSchema.getEmptyInternalSchema)
- } catch {
- case _: Exception => InternalSchema.getEmptyInternalSchema
+
+ val schemaEvolutionEnabled: Boolean = optParams.getOrElse(DataSourceReadOptions.SCHEMA_EVOLUTION_ENABLED.key,
Review Comment:
@xiarixiaoyao we have to make this config mandatory on the read-path as well -- without it there's no way for us to know whether we should be reading the Hudi table via the V1 or V2 API (currently we fall back to V1 for all use-cases except schema evolution, to make sure there's no regression in Spark SQL performance compared to 0.10)
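
The dispatch described above can be sketched roughly as follows. This is a minimal illustration, not Hudi's actual code: `ReadPathSelector`, `ReadApi`, and the hard-coded config key are assumed names introduced here for clarity; it only shows the shape of the "default to V1 unless schema evolution is enabled" decision.

```scala
// Illustrative sketch of a config-driven read-path choice: fall back to the
// V1 read path (no performance regression) unless schema evolution is
// explicitly enabled, in which case the V2 path is required.
object ReadPathSelector {
  // Assumed config key for illustration only.
  val SchemaEvolutionEnabledKey = "hoodie.schema.on.read.enable"

  sealed trait ReadApi
  case object V1 extends ReadApi
  case object V2 extends ReadApi

  def select(optParams: Map[String, String]): ReadApi = {
    // Absent or non-"true" values fall back to the default (disabled).
    val schemaEvolutionEnabled =
      optParams.get(SchemaEvolutionEnabledKey).contains("true")
    if (schemaEvolutionEnabled) V2 else V1
  }
}
```

With no flag present `select` returns `V1`; only an explicit `"true"` switches a read to the V2 path, which mirrors the fallback behaviour the comment describes.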
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]