nsivabalan commented on code in PR #13964:
URL: https://github.com/apache/hudi/pull/13964#discussion_r2373691589


##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/BufferedRecords.java:
##########
@@ -48,6 +48,24 @@ public static <T> BufferedRecord<T> fromHoodieRecord(HoodieRecord record, Schema
     return new BufferedRecord<>(recordKey, recordContext.convertOrderingValueToEngineType(orderingValue), data, schemaId, inferOperation(isDelete, record.getOperation()));
   }
 
+  public static <T> BufferedRecord<T> fromHoodieRecordWithDeflatedRecord(HoodieRecord record, Schema schema, RecordContext<T> recordContext, Properties props,
+                                                                         String[] orderingFields, DeleteContext deleteContext) {

Review Comment:
   Once we get the COW write path CI green, I will remove this special handling.



##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/BufferedRecords.java:
##########
@@ -48,6 +48,24 @@ public static <T> BufferedRecord<T> fromHoodieRecord(HoodieRecord record, Schema
     return new BufferedRecord<>(recordKey, recordContext.convertOrderingValueToEngineType(orderingValue), data, schemaId, inferOperation(isDelete, record.getOperation()));
   }
 
+  public static <T> BufferedRecord<T> fromHoodieRecordWithDeflatedRecord(HoodieRecord record, Schema schema, RecordContext<T> recordContext, Properties props,
+                                                                         String[] orderingFields, DeleteContext deleteContext) {
+    HoodieOperation hoodieOperation = record.getIgnoreIndexUpdate() ? HoodieOperation.UPDATE_BEFORE : record.getOperation();

Review Comment:
   Down the line, we are using inferOperation. I will check what is going on there anyway.
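For readers following along, the shape of the helper being discussed is roughly "keep an explicitly tagged operation if one exists, otherwise fall back to the delete flag". The sketch below is a minimal, hypothetical stand-in for `inferOperation` using made-up types; it is not Hudi's actual implementation, only one plausible form of the logic the comment refers to.

```java
// Hypothetical stand-in for BufferedRecords.inferOperation (not Hudi's code):
// an explicitly tagged operation on the record wins; otherwise the operation
// is derived from the delete flag computed from the payload.
public class OperationInference {
    public enum Op { INSERT, UPDATE_BEFORE, UPDATE_AFTER, DELETE }

    public static Op inferOperation(boolean isDelete, Op taggedOp) {
        if (taggedOp != null) {
            return taggedOp; // explicit tag takes precedence
        }
        return isDelete ? Op.DELETE : Op.INSERT;
    }
}
```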



##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala:
##########
@@ -508,11 +508,16 @@ class HoodieSparkSqlWriterInternal {
              throw new UnsupportedOperationException(s"${writeConfig.getRecordMerger.getClass.getName} only support parquet log.")
            }
            instantTime = client.startCommit(commitActionType)
+            // if table has undergone upgrade, we need to reload table config
+            tableConfig = HoodieTableMetaClient.builder()

Review Comment:
   The proper fix will be part of https://github.com/apache/hudi/pull/13979.
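The reason the hunk rebuilds the meta client is that a table config snapshotted before an upgrade goes stale once the upgrade rewrites hoodie.properties on storage. The sketch below illustrates that staleness with hypothetical stand-in types; it does not use Hudi's actual `HoodieTableMetaClient` API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (hypothetical types, not Hudi's API): a "meta client"
// snapshots the table config at build time, so a snapshot taken before an
// upgrade keeps the old values until the config is reloaded.
public class StaleConfigDemo {
    // Stand-in for the hoodie.properties file on storage.
    static final Map<String, String> storage =
        new HashMap<>(Map.of("hoodie.table.version", "SIX"));

    // A fresh "meta client" reads the current on-storage properties.
    public static Map<String, String> buildTableConfig() {
        return new HashMap<>(storage);
    }

    // The upgrade path rewrites the table version on storage.
    public static void upgradeTable() {
        storage.put("hoodie.table.version", "EIGHT");
    }
}
```

The cached snapshot keeps reporting the pre-upgrade version, which is why the write path rebuilds the config after the upgrade runs.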
   



##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/BufferedRecords.java:
##########
@@ -48,6 +48,24 @@ public static <T> BufferedRecord<T> fromHoodieRecord(HoodieRecord record, Schema
     return new BufferedRecord<>(recordKey, recordContext.convertOrderingValueToEngineType(orderingValue), data, schemaId, inferOperation(isDelete, record.getOperation()));
   }
 
+  public static <T> BufferedRecord<T> fromHoodieRecordWithDeflatedRecord(HoodieRecord record, Schema schema, RecordContext<T> recordContext, Properties props,
+                                                                         String[] orderingFields, DeleteContext deleteContext) {
+    HoodieOperation hoodieOperation = record.getIgnoreIndexUpdate() ? HoodieOperation.UPDATE_BEFORE : record.getOperation();
+    boolean isDelete = record.isDelete(deleteContext, props);
+    return fromHoodieRecordWithDeflatedRecord(record, schema, recordContext, props, orderingFields, isDelete, hoodieOperation);
+  }
+
+  public static <T> BufferedRecord<T> fromHoodieRecordWithDeflatedRecord(HoodieRecord record, Schema schema, RecordContext<T> recordContext,

Review Comment:
   Yes, I will be removing these additional methods, @danny0405. For now, I was targeting a green CI just for COW writes, and we are in a green state now.
   I will look to make this generic across the board in subsequent patches.
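The overload pattern in the hunk above can be summarized as: the convenience overload derives the operation (mapping ignore-index-update records to UPDATE_BEFORE) and the delete flag, then delegates to a fully specified overload so both call paths build the record the same way. Below is a hedged sketch of that delegation using made-up stand-in types, not Hudi's actual classes or signatures.

```java
// Illustrative sketch (hypothetical names, not Hudi's API) of the
// derive-then-delegate overload pattern from the hunk above.
public class DeflatedRecordSketch {
    public enum Op { UPDATE_BEFORE, UPDATE_AFTER, DELETE }

    // Minimal stand-ins for HoodieRecord and BufferedRecord.
    public record Rec(String key, boolean ignoreIndexUpdate, boolean deleted, Op op) {}
    public record Buffered(String key, boolean isDelete, Op op) {}

    public static Buffered from(Rec r) {
        // Records flagged to skip index updates are treated as UPDATE_BEFORE.
        Op op = r.ignoreIndexUpdate() ? Op.UPDATE_BEFORE : r.op();
        return from(r, r.deleted(), op);
    }

    // Fully specified overload: the single place the record is assembled.
    public static Buffered from(Rec r, boolean isDelete, Op op) {
        return new Buffered(r.key(), isDelete, op);
    }
}
```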



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
