cshuo commented on code in PR #12795:
URL: https://github.com/apache/hudi/pull/12795#discussion_r1962781877


##########
rfc/rfc-88/rfc-88.md:
##########
@@ -0,0 +1,596 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+# RFC-88: New Schema/DataType/Expression Abstractions
+
+## Proposers
+
+- @cshuo
+- @danny0405
+
+## Approvers
+- ..
+
+## Status
+
+JIRA: https://issues.apache.org/jira/browse/HUDI-8966
+
+## Abstract
+
+Hudi is currently tightly coupled with Avro, particularly in its basic data types, schema, and the internal record
+representation used in the read/write paths. This coupling leads to numerous issues. For example, unnecessary
+record-level Ser/De costs are introduced by converting between engine-native rows and Avro records; the type system
+cannot be extended to support other complex/advanced types, such as Variant; and the basic read/write functionality
+cannot be effectively reused across engines. As for expressions, each engine currently has its own implementation of
+pushdown optimization, which is not friendly to extension as more indices are introduced.
+
+This RFC proposes an improvement to the current Schema/Type/Expression abstractions, with the following goals:
+* Use a native schema as the authoritative schema, and make the type system extensible to support or customize other types, e.g., Variant.
+* Abstract the common implementation of writers/readers and move it to the hudi-common module, so that engines only need to implement getters/setters for their specific rows (Flink `RowData` and Spark `InternalRow`).
+* Add a centralized, sharable expression abstraction for all kinds of expression pushdown across engines, and integrate it deeply with the MDT indices.
+
+
+## Background
+### Two 'Schema's
+There currently exist two schemas in Hudi's table management: the table schema in Avro format and Hudi's native `InternalSchema`.

Review Comment:
   `InternalSchema` was first introduced for full schema evolution in Spark, and it is now used in other engine integrations for schema evolution as well. Generally, when the config `hoodie.schema.on.read.enable` is enabled, the `InternalSchema` is saved in the commit metadata, and it is used to reconcile the schema for writing and reading in Spark.
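   For context, a minimal config fragment that turns this behavior on (the key is taken from the comment above; where it is set, e.g. writer options vs. table properties, depends on the engine integration):

   ```
   hoodie.schema.on.read.enable=true
   ```

   With this enabled, the writer persists the `InternalSchema` alongside the commit metadata, as described above.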



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
