pratyakshsharma commented on code in PR #4910:
URL: https://github.com/apache/hudi/pull/4910#discussion_r1015042878


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/action/commit/HoodieMergeHelper.java:
##########
@@ -78,12 +90,41 @@ public void runMerge(HoodieTable<T, HoodieData<HoodieRecord<T>>, HoodieData<Hood
 
     BoundedInMemoryExecutor<GenericRecord, GenericRecord, Void> wrapper = null;
     HoodieFileReader<GenericRecord> reader = HoodieFileReaderFactory.getFileReader(cfgForHoodieFile, mergeHandle.getOldFilePath());
+
+    Option<InternalSchema> querySchemaOpt = SerDeHelper.fromJson(table.getConfig().getInternalSchema());
+    boolean needToReWriteRecord = false;
+    // TODO support bootstrap
+    if (querySchemaOpt.isPresent() && !baseFile.getBootstrapBaseFile().isPresent()) {
+      // check implicitly add columns, and position reorder(spark sql may change cols order)
+      InternalSchema querySchema = AvroSchemaEvolutionUtils.evolveSchemaFromNewAvroSchema(readSchema, querySchemaOpt.get(), true);
+      long commitInstantTime = Long.valueOf(FSUtils.getCommitTime(mergeHandle.getOldFilePath().getName()));
+      InternalSchema writeInternalSchema = InternalSchemaCache.searchSchemaAndCache(commitInstantTime, table.getMetaClient(), table.getConfig().getInternalSchemaCacheEnable());
+      if (writeInternalSchema.isEmptySchema()) {
+        throw new HoodieException(String.format("cannot find file schema for current commit %s", commitInstantTime));
+      }
+      List<String> colNamesFromQuerySchema = querySchema.getAllColsFullName();
+      List<String> colNamesFromWriteSchema = writeInternalSchema.getAllColsFullName();
+      List<String> sameCols = colNamesFromWriteSchema.stream()
+              .filter(f -> colNamesFromQuerySchema.contains(f)
+                      && writeInternalSchema.findIdByName(f) == querySchema.findIdByName(f)
+                      && writeInternalSchema.findIdByName(f) != -1
+                      && writeInternalSchema.findType(writeInternalSchema.findIdByName(f)).equals(querySchema.findType(writeInternalSchema.findIdByName(f)))).collect(Collectors.toList());
+      readSchema = AvroInternalSchemaConverter.convert(new InternalSchemaMerger(writeInternalSchema, querySchema, true, false).mergeSchema(), readSchema.getName());
+      Schema writeSchemaFromFile = AvroInternalSchemaConverter.convert(writeInternalSchema, readSchema.getName());

Review Comment:
   So let me reframe my question. On line 99, we only take care of adding the new columns from the incoming schema while combining the latest schema from the commit file (S1) with the incoming schema (S2). After combining them, we store the combined schema in the variable querySchema (S3). As I understand it, the variable writeInternalSchema (S4) contains the same schema as S1.
   Now, on line 112, we merge S3 and S4 to take care of column type changes and column renames. Finally, we convert the result from an InternalSchema to an Avro schema using the `convert` call.
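
   To make sure we are reading it the same way, here is the flow from the diff annotated with the S1-S4 labels above (purely a restatement of the code under review, nothing new):
   ```java
   // S3 (querySchema): the combined schema, line 99 only handles newly added
   // columns and column reordering between readSchema and the incoming schema
   InternalSchema querySchema = AvroSchemaEvolutionUtils.evolveSchemaFromNewAvroSchema(readSchema, querySchemaOpt.get(), true);

   // S4 (writeInternalSchema): the schema of the commit that wrote the old file,
   // which as far as I can tell is the same as S1
   InternalSchema writeInternalSchema = InternalSchemaCache.searchSchemaAndCache(
       commitInstantTime, table.getMetaClient(), table.getConfig().getInternalSchemaCacheEnable());

   // line 112: merge S3 and S4 (covers type changes and renames), then convert back to Avro
   readSchema = AvroInternalSchemaConverter.convert(
       new InternalSchemaMerger(writeInternalSchema, querySchema, true, false).mergeSchema(),
       readSchema.getName());
   ```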
   
   Please correct me if I am wrong in the explanation above. With that, I have the following questions:
   1. If S4 is the same as S1, why do we even need the variable `writeInternalSchema`? We could simply use S1 throughout the if block.
   2. Are we not supporting deletion of columns yet? Can you point me to the lines of code, or the method, where deletion is taken care of?
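
   Side note, mostly to check my own parsing: the `sameCols` filter is fairly dense, and I read it as the predicate below. This is a readability sketch only (it assumes the locals from the diff are in scope, plus a `java.util.function.Predicate` import), not a change request:
   ```java
   // a column is kept only when its full name, field id, and type are
   // identical in the write schema and the query schema
   Predicate<String> unchangedInBothSchemas = f ->
       colNamesFromQuerySchema.contains(f)
           && writeInternalSchema.findIdByName(f) == querySchema.findIdByName(f)
           && writeInternalSchema.findIdByName(f) != -1
           && writeInternalSchema.findType(writeInternalSchema.findIdByName(f))
               .equals(querySchema.findType(writeInternalSchema.findIdByName(f)));

   List<String> sameCols = colNamesFromWriteSchema.stream()
       .filter(unchangedInBothSchemas)
       .collect(Collectors.toList());
   ```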



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
