n3nash commented on a change in pull request #1457: [HUDI-741] Added checks to validate Hoodie's schema evolution.
URL: https://github.com/apache/incubator-hudi/pull/1457#discussion_r401966800
########## File path: hudi-common/src/main/java/org/apache/hudi/common/avro/SchemaCompatibility.java ##########
@@ -0,0 +1,566 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.common.avro;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.avro.AvroRuntimeException;
+import org.apache.avro.Schema;
+import org.apache.avro.Schema.Field;
+import org.apache.avro.Schema.Type;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * NOTE: This code is copied from org.apache.avro.SchemaCompatibility and changed for the HUDI use case.
+ *
+ * HUDI requires a Schema to be specified in HoodieWriteConfig, which the HoodieWriteClient uses to
+ * create the records. The schema is also saved in the data files (parquet format) and log files (avro format).
+ * Since a schema is required each time new data is ingested into a HUDI dataset, the schema can be evolved over time.
+ *
+ * HUDI-specific validation of schema evolution should ensure that a newer schema can be used for the dataset by
+ * checking that data written using the old schema can be read using the new schema.
+ *
+ * A new schema is compatible only if:
+ * 1. There is no change in the schema, or
+ * 2. A field has been added and it has a default value specified.
+ *
+ * A new schema is incompatible if:
+ * 1. A field has been deleted,
+ * 2. A field has been renamed (treated as a delete + add), or
+ * 3. A field's type has changed to be incompatible with the older type.
+ */
+public class SchemaCompatibility {

Review comment:
@prashantwason Can you please mark the lines/methods you have changed, perhaps with **MOD**, for future reference? Also, can you add 1-2 lines explaining why each change is needed?

----------------------------------------------------------------
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: [email protected] With regards, Apache Git Services
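The compatibility rules listed in the class javadoc can be sketched as a simplified standalone check. This is a hypothetical illustration, not Hudi's actual implementation: the `FieldDef` record and `isCompatible` method below are invented for this sketch, and real validation resolves Avro schemas (including nested and union types) rather than comparing flat field lists with exact type equality.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SchemaEvolutionCheck {

    // Hypothetical flat field model: name, type, and whether a default value is present.
    public record FieldDef(String name, String type, boolean hasDefault) {}

    // Returns true if data written with oldFields can be read with newFields,
    // per the javadoc rules: every old field must survive under the same name
    // with the same type (rename = delete + add, hence incompatible), and every
    // newly added field must carry a default value.
    public static boolean isCompatible(List<FieldDef> oldFields, List<FieldDef> newFields) {
        Map<String, FieldDef> newByName = new HashMap<>();
        for (FieldDef f : newFields) {
            newByName.put(f.name(), f);
        }
        for (FieldDef oldField : oldFields) {
            FieldDef match = newByName.remove(oldField.name());
            if (match == null) {
                return false; // field deleted or renamed
            }
            if (!match.type().equals(oldField.type())) {
                return false; // type changed (simplified: exact match only)
            }
        }
        // Any entries left over are newly added fields; each needs a default.
        for (FieldDef added : newByName.values()) {
            if (!added.hasDefault()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<FieldDef> old = List.of(new FieldDef("id", "string", false));
        // Field added with a default: compatible.
        System.out.println(isCompatible(old,
                List.of(new FieldDef("id", "string", false),
                        new FieldDef("ts", "long", true))));
        // Field added without a default: incompatible.
        System.out.println(isCompatible(old,
                List.of(new FieldDef("id", "string", false),
                        new FieldDef("ts", "long", false))));
        // Field deleted: incompatible.
        System.out.println(isCompatible(old, List.of()));
    }
}
```

Note that in real Avro resolution a type change such as `int` to `long` is a legal promotion; the exact-match comparison above is deliberately stricter than Avro for brevity.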
