alexeykudinkin commented on code in PR #7333:
URL: https://github.com/apache/hudi/pull/7333#discussion_r1036446066
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala:
##########
@@ -200,16 +198,20 @@ object HoodieSparkSqlWriter {
       .getOrElse(getAvroRecordNameAndNamespace(tblName))
     val sourceSchema = convertStructTypeToAvroSchema(df.schema, avroRecordName, avroRecordNamespace)
-    val internalSchemaOpt = getLatestTableInternalSchema(fs, basePath, sparkContext).orElse {
-      val schemaEvolutionEnabled = parameters.getOrDefault(DataSourceReadOptions.SCHEMA_EVOLUTION_ENABLED.key,
-        DataSourceReadOptions.SCHEMA_EVOLUTION_ENABLED.defaultValue.toString).toBoolean
-      // In case we need to reconcile the schema and schema evolution is enabled,
-      // we will force-apply schema evolution to the writer's schema
-      if (shouldReconcileSchema && schemaEvolutionEnabled) {
-        Some(AvroInternalSchemaConverter.convert(sourceSchema))
-      } else {
-        None
+    val schemaEvolutionEnabled = parameters.getOrDefault(DataSourceReadOptions.SCHEMA_EVOLUTION_ENABLED.key,
Review Comment:
`getLatestTableInternalSchema` is an internal method of `HoodieSparkSqlWriter`, so it could accept a `HoodieWriteConfig` with no issues. Essentially, we want to abstract this check within that method to avoid duplicating it in multiple places.
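
A rough sketch of what that refactoring could look like (not the actual patch; the `HoodieWriteConfig`-based signature, the `getStringOrDefault` accessor, and the `metaClient` parameter are illustrative assumptions):

```scala
import org.apache.hudi.DataSourceReadOptions
import org.apache.hudi.common.table.{HoodieTableMetaClient, TableSchemaResolver}
import org.apache.hudi.config.HoodieWriteConfig
import org.apache.hudi.internal.schema.InternalSchema

// Sketch only: getLatestTableInternalSchema takes the write config and does
// the schema-evolution gating itself, so call sites stop duplicating it.
private def getLatestTableInternalSchema(config: HoodieWriteConfig,
                                         metaClient: HoodieTableMetaClient): Option[InternalSchema] = {
  // Read the same flag the diff above pulls out of the parameters map,
  // but do it once here instead of at every call site (accessor name is
  // an assumption).
  val schemaEvolutionEnabled =
    config.getStringOrDefault(DataSourceReadOptions.SCHEMA_EVOLUTION_ENABLED).toBoolean
  if (!schemaEvolutionEnabled) {
    None
  } else {
    try {
      // TableSchemaResolver reads the InternalSchema (a Java-style Option)
      // out of the latest commit metadata.
      val resolved = new TableSchemaResolver(metaClient).getTableInternalSchemaFromCommitMetadata
      if (resolved.isPresent) Some(resolved.get) else None
    } catch {
      case _: Exception => None
    }
  }
}
```

The call site in the diff then shrinks to `getLatestTableInternalSchema(writeConfig, metaClient).orElse { ... }`, keeping only the reconcile fallback and no longer re-reading the flag.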
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]