the-other-tim-brown commented on code in PR #9743:
URL: https://github.com/apache/hudi/pull/9743#discussion_r1357635961
##########
hudi-common/src/main/java/org/apache/hudi/avro/HoodieAvroUtils.java:
##########
@@ -1131,6 +1196,25 @@ private static Schema getActualSchemaFromUnion(Schema schema, Object data) {
return actualSchema;
}
+  private static Schema getActualSchemaFromUnion(Schema schema) {
+    Schema actualSchema;
+    if (schema.getType() != UNION) {
+      return schema;
+    }
+    if (schema.getTypes().size() == 2
+        && schema.getTypes().get(0).getType() == Schema.Type.NULL) {
+      actualSchema = schema.getTypes().get(1);
+    } else if (schema.getTypes().size() == 2
+        && schema.getTypes().get(1).getType() == Schema.Type.NULL) {
+      actualSchema = schema.getTypes().get(0);
+    } else if (schema.getTypes().size() == 1) {
+      actualSchema = schema.getTypes().get(0);
Review Comment:
Is there any way to share this logic with the method above?
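One possible shape (a hypothetical sketch, not the PR's code): both `getActualSchemaFromUnion` overloads need the same "single non-null branch" rule, so it could live in one shared helper. Union members are modeled as plain strings below so the sketch stays self-contained; the real code operates on `org.apache.avro.Schema`.

```java
import java.util.List;

// Hypothetical refactoring sketch: the union-unwrapping branches from the
// diff, extracted into one helper that either overload could call. Union
// members are plain strings purely for illustration.
public class UnionUnwrapSketch {

  // Returns the single non-null member of a union, mirroring the branches in the diff.
  static String actualFromUnion(List<String> unionTypes) {
    if (unionTypes.size() == 2 && unionTypes.get(0).equals("null")) {
      return unionTypes.get(1);
    } else if (unionTypes.size() == 2 && unionTypes.get(1).equals("null")) {
      return unionTypes.get(0);
    } else if (unionTypes.size() == 1) {
      return unionTypes.get(0);
    }
    // The data-based overload would handle multi-branch unions here.
    throw new IllegalArgumentException("Cannot resolve union without data: " + unionTypes);
  }

  public static void main(String[] args) {
    System.out.println(actualFromUnion(List.of("null", "string"))); // string
    System.out.println(actualFromUnion(List.of("long", "null")));   // long
  }
}
```

The two-argument overload could then fall back to data-based resolution only for unions this helper rejects.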
##########
hudi-common/src/main/java/org/apache/hudi/common/config/HoodieCommonConfig.java:
##########
@@ -71,6 +71,14 @@ public class HoodieCommonConfig extends HoodieConfig {
      + " operation will fail schema compatibility check. Set this option to true will make the newly added "
      + " column nullable to successfully complete the write operation.");
+  public static final ConfigProperty<String> ADD_NULL_FOR_DELETED_COLUMNS = ConfigProperty
Review Comment:
What would the behavior be when this is false and schema evolution is
enabled? Is there an option where it would auto-drop the column in the target
table?
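For concreteness, here is an illustrative sketch of the semantics the config name suggests (this is NOT Hudi's implementation, and the behavior when false is an assumption): a column present in the table schema but missing from the incoming batch is either null-filled or fails the write. Records are modeled as plain maps.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: when a table column is absent from the incoming
// record, either keep it null-filled (option true) or fail (option false).
public class DeletedColumnSketch {

  static Map<String, Object> reconcileRecord(Map<String, Object> incoming,
                                             List<String> tableColumns,
                                             boolean addNullForDeletedColumns) {
    Map<String, Object> out = new LinkedHashMap<>();
    for (String col : tableColumns) {
      if (incoming.containsKey(col)) {
        out.put(col, incoming.get(col));
      } else if (addNullForDeletedColumns) {
        out.put(col, null); // column dropped upstream: keep it in the table, null-filled
      } else {
        throw new IllegalStateException("Incoming batch is missing column: " + col);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, Object> incoming = Map.of("id", 1);
    Map<String, Object> out = reconcileRecord(incoming, List.of("id", "name"), true);
    System.out.println(out); // {id=1, name=null}
  }
}
```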
##########
hudi-common/src/main/java/org/apache/hudi/internal/schema/utils/AvroSchemaEvolutionUtils.java:
##########
@@ -111,17 +111,21 @@ public static InternalSchema reconcileSchema(Schema incomingSchema, InternalSche
     return SchemaChangeUtils.applyTableChanges2Schema(internalSchemaAfterAddColumns, typeChange);
   }
+  public static Schema reconcileSchema(Schema incomingSchema, Schema oldTableSchema) {
+    return convert(reconcileSchema(incomingSchema, convert(oldTableSchema)), oldTableSchema.getFullName());
+  }
+
   /**
-   * Reconciles nullability requirements b/w {@code source} and {@code target} schemas,
+   * Reconciles nullability and datatype requirements b/w {@code source} and {@code target} schemas,
    * by adjusting these of the {@code source} schema to be in-line with the ones of the
    * {@code target} one
    *
    * @param sourceSchema source schema that needs reconciliation
    * @param targetSchema target schema that source schema will be reconciled against
    * @param opts config options
-   * @return schema (based off {@code source} one) that has nullability constraints reconciled
+   * @return schema (based off {@code source} one) that has nullability constraints and datatypes reconciled
    */
-  public static Schema reconcileNullability(Schema sourceSchema, Schema targetSchema, Map<String, String> opts) {
+  public static Schema reconcileSchemaRequirements(Schema sourceSchema, Schema targetSchema, Map<String, String> opts) {
Review Comment:
Do we have unit testing on this?
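For reference, a rough sketch of the shape such a unit test could take. Avro schemas are stood in for by `Map<fieldName, isNullable>` so the example is self-contained; a real test would build Avro `Schema` objects and call `AvroSchemaEvolutionUtils.reconcileSchemaRequirements`. The reconcile rule shown (widen a source field to nullable when the target field is nullable) is an assumption about the intended behavior.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a nullability-reconciliation unit test. The toy reconcile rule
// and the map-based schema model are illustrative assumptions.
public class ReconcileRequirementsTestSketch {

  // Toy reconcile: a source field becomes nullable if the target's field is nullable.
  static Map<String, Boolean> reconcileNullability(Map<String, Boolean> source,
                                                   Map<String, Boolean> target) {
    Map<String, Boolean> out = new LinkedHashMap<>();
    for (Map.Entry<String, Boolean> e : source.entrySet()) {
      boolean targetNullable = target.getOrDefault(e.getKey(), e.getValue());
      out.put(e.getKey(), e.getValue() || targetNullable);
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, Boolean> source = Map.of("id", false, "name", false);
    Map<String, Boolean> target = Map.of("id", false, "name", true);
    Map<String, Boolean> reconciled = reconcileNullability(source, target);
    if (reconciled.get("id")) throw new AssertionError("id should stay non-nullable");
    if (!reconciled.get("name")) throw new AssertionError("name should widen to nullable");
    System.out.println("ok");
  }
}
```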
##########
hudi-common/src/main/java/org/apache/hudi/internal/schema/convert/AvroInternalSchemaConverter.java:
##########
@@ -68,6 +68,17 @@ public static Schema convert(InternalSchema internalSchema, String name) {
     return buildAvroSchemaFromInternalSchema(internalSchema, name);
   }
+  /**
+   * Normalize the null ordering in the unions of an Avro schema.
+   *
+   * @param schema the Avro schema to normalize.
+   * @return an Avro schema with a consistent null ordering in its unions.
+   */
+  public static Schema fixNullOrdering(Schema schema) {
Review Comment:
Why do we need to do this conversion?
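One plausible answer, as a hedged sketch (not Hudi's code): Avro only accepts a null default value for a field whose union lists `"null"` as its FIRST branch, so a schema arriving with `["string", "null"]` must be rewritten to `["null", "string"]` before null defaults can be attached. Union members are modeled as strings to keep the sketch self-contained.

```java
import java.util.List;

// Illustration of why a null-ordering pass can be needed: Avro validates a
// field's default value against the first branch of its union, so a null
// default requires "null" to come first. This is a toy model, not the real
// fixNullOrdering implementation.
public class NullOrderingSketch {

  static List<String> fixNullOrdering(List<String> union) {
    if (union.size() == 2 && union.get(1).equals("null")) {
      return List.of("null", union.get(0)); // move the null branch to the front
    }
    return union; // already ordered, or not a simple nullable union
  }

  public static void main(String[] args) {
    System.out.println(fixNullOrdering(List.of("string", "null"))); // [null, string]
    System.out.println(fixNullOrdering(List.of("null", "long")));   // [null, long]
  }
}
```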
##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/streamer/StreamSync.java:
##########
@@ -661,6 +652,35 @@ private Pair<SchemaProvider, Pair<String, JavaRDD<HoodieRecord>>> fetchFromSourc
     return Pair.of(schemaProvider, Pair.of(checkpointStr, records));
   }
+  /**
+   * Apply schema reconciliation and schema evolution (schema-on-read) rules and generate a new target schema provider.
+   *
+   * @param incomingSchema schema of the source data.
+   * @param sourceSchemaProvider the source schema provider.
+   * @return the SchemaProvider that can be used as the writer schema.
+   */
+  private SchemaProvider getDeducedSchemaProvider(Schema incomingSchema, SchemaProvider sourceSchemaProvider) {
Review Comment:
Let's try to step through a case that can hit that path today.
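One way to frame the stepping-through: the deduced writer schema is the incoming schema when the table has no committed schema yet, and otherwise the result of reconciling the incoming schema against the latest table schema. The following is a hypothetical control-flow sketch with illustrative names, not Hudi's API.

```java
import java.util.Optional;
import java.util.function.BinaryOperator;

// Hypothetical sketch of the schema-deduction control flow. Schemas are plain
// strings and the reconcile step is injected, purely for illustration.
public class DeduceSchemaSketch {

  static String deduceWriterSchema(String incomingSchema,
                                   Optional<String> latestTableSchema,
                                   BinaryOperator<String> reconcile) {
    return latestTableSchema
        .map(tableSchema -> reconcile.apply(incomingSchema, tableSchema))
        .orElse(incomingSchema); // first commit: nothing to reconcile against
  }

  public static void main(String[] args) {
    BinaryOperator<String> preferTable = (incoming, table) -> table;
    System.out.println(deduceWriterSchema("incoming", Optional.empty(), preferTable));     // incoming
    System.out.println(deduceWriterSchema("incoming", Optional.of("table"), preferTable)); // table
  }
}
```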
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieSchemaUtils.scala:
##########
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hudi
+
+import org.apache.hudi.common.config.HoodieConfig
+import org.apache.hudi.common.table.{HoodieTableMetaClient, TableSchemaResolver}
+import org.apache.hudi.internal.schema.InternalSchema
+
+/**
+ * Util methods for Schema evolution in Hudi
+ */
+object HoodieSchemaUtils {
+  /**
+   * Get the latest internal schema from the table.
+   *
+   * @param config instance of {@link HoodieConfig}
+   * @param tableMetaClient instance of HoodieTableMetaClient
+   * @return Pair of (boolean, table schema), where the first entry will be true only if schema conversion is required.
Review Comment:
Return type is not accurate here? I think the name of this is a bit misleading, since you're only getting the latest table internal schema if schema evolution is enabled. This can be confusing to callers.
Also, do we want to introduce another class related to schemas?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]