the-other-tim-brown commented on code in PR #13964:
URL: https://github.com/apache/hudi/pull/13964#discussion_r2386437506


##########
hudi-common/src/main/java/org/apache/hudi/common/engine/RecordContext.java:
##########
@@ -92,6 +92,10 @@ public T extractDataFromRecord(HoodieRecord record, Schema schema, Properties pr
     return (T) record.getData();
   }
 
+  public T extractDeflatedDataFromRecord(HoodieRecord record, Schema schema, Properties properties) {

Review Comment:
   Similarly, can we clean this up now?



##########
hudi-hadoop-common/src/test/java/org/apache/hudi/common/functional/TestHoodieLogFormat.java:
##########
@@ -742,8 +743,14 @@ private void testBasicAppendAndScanMultipleFiles(ExternalSpillableMap.DiskMapTyp
         .build();
 
     List<IndexedRecord> scannedRecords = new ArrayList<>();
+    int counter = 0;

Review Comment:
   Is this used?



##########
hudi-client/hudi-flink-client/src/main/java/org/apache/hudi/client/model/HoodieFlinkRecord.java:
##########
@@ -103,7 +103,7 @@ protected Comparable<?> doGetOrderingValue(Schema recordSchema, Properties props
         if (recordSchema.getField(field) == null) {
           return OrderingValues.getDefault();
         }
-        return (Comparable<?>) getColumnValue(recordSchema, field, props);

Review Comment:
   Is this intentional?



##########
hudi-common/src/test/java/org/apache/hudi/common/table/read/buffer/TestKeyBasedFileGroupRecordBuffer.java:
##########
@@ -233,7 +235,9 @@ void readWithCommitTimeOrderingWithRecords() throws IOException {
         testRecord5, testRecord6).iterator()));
 
    List<IndexedRecord> actualRecords = getActualRecords(fileGroupRecordBuffer);
-    assertEquals(Arrays.asList(testRecord1UpdateWithSameTime, testRecord2Update, testRecord3Update, testRecord4EarlierUpdate, testRecord7), actualRecords);
+    assertEquals(Arrays.asList(testRecord1UpdateWithSameTime, testRecord2Update,

Review Comment:
   Instead of `Arrays.asList(..).stream()` you can just use `Stream.of(...)`.
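
   For illustration, here is the shape of the suggested refactor on plain strings (a sketch; the `testRecord*` objects above are not reproduced here):

   ```java
   import java.util.Arrays;
   import java.util.List;
   import java.util.stream.Collectors;
   import java.util.stream.Stream;

   public class StreamOfExample {
       public static void main(String[] args) {
           // Flagged pattern: build a list only to stream it again.
           List<String> viaList = Arrays.asList("r1", "r2", "r3").stream()
               .map(String::toUpperCase)
               .collect(Collectors.toList());

           // Suggested pattern: Stream.of(...) starts the pipeline directly.
           List<String> viaStream = Stream.of("r1", "r2", "r3")
               .map(String::toUpperCase)
               .collect(Collectors.toList());

           System.out.println(viaList.equals(viaStream)); // true: same elements, same order
       }
   }
   ```

   Both pipelines produce the same result; `Stream.of` just skips the intermediate `List`.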



##########
hudi-hadoop-common/src/test/java/org/apache/hudi/common/functional/TestHoodieLogFormat.java:
##########
@@ -742,8 +743,14 @@ private void testBasicAppendAndScanMultipleFiles(ExternalSpillableMap.DiskMapTyp
         .build();
 
     List<IndexedRecord> scannedRecords = new ArrayList<>();
+    int counter = 0;
     for (HoodieRecord record : scanner) {
-      scannedRecords.add(record.toIndexedRecord(schema, CollectionUtils.emptyProps()).get().getData());
+      Object data = record.toIndexedRecord(schema, CollectionUtils.emptyProps()).get().getData();
+      if (data instanceof SerializableIndexedRecord) {
+        scannedRecords.add((GenericRecord)((SerializableIndexedRecord)data).getData());
+      } else {
+        scannedRecords.add((GenericRecord)data);

Review Comment:
   Style: add a space after the cast, e.g. `(GenericRecord) data`.
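
   A generic illustration of the style being asked for (plain `String` is a stand-in here, not the Hudi/Avro types above):

   ```java
   public class CastStyleExample {
       public static void main(String[] args) {
           Object data = "payload";
           // Flagged style: no space between the cast and its operand.
           String cramped = (String)data;
           // Suggested style: one space after the closing parenthesis of the cast.
           String spaced = (String) data;
           System.out.println(cramped.equals(spaced)); // true: the spacing is purely cosmetic
       }
   }
   ```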



##########
hudi-common/src/main/java/org/apache/hudi/avro/AvroRecordContext.java:
##########
@@ -136,6 +136,15 @@ public IndexedRecord extractDataFromRecord(HoodieRecord record, Schema schema, P
     }
   }
 
+  @Override
+  public IndexedRecord extractDeflatedDataFromRecord(HoodieRecord record, Schema schema, Properties properties) {

Review Comment:
   bump on this



##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/BufferedRecords.java:
##########
@@ -44,7 +44,18 @@ public static <T> BufferedRecord<T> fromHoodieRecord(HoodieRecord record, Schema
     T data = recordContext.extractDataFromRecord(record, schema, props);
    String recordKey = hoodieKey == null ? recordContext.getRecordKey(data, schema) : hoodieKey.getRecordKey();
     Integer schemaId = recordContext.encodeAvroSchema(schema);
-    Comparable orderingValue = record.getOrderingValue(schema, props, orderingFields);
+    Comparable orderingValue = record.getCachedOrderingValue() != null ? record.getCachedOrderingValue() : recordContext.getOrderingValue(data, schema, orderingFields);

Review Comment:
   The `record.getOrderingValue` should give you the same value and handle the caching of the result as well.
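
   A toy stand-in for the caching behavior described (these classes are illustrative only, not the actual `HoodieRecord` API): the getter computes the ordering value once, caches it, and returns the cached value on later calls, so call sites need no explicit cache check.

   ```java
   public class OrderingValueCacheSketch {
       // Hypothetical record type whose getter memoizes its result internally.
       static class Record {
           private Comparable<?> cachedOrderingValue;
           private final long ts;

           Record(long ts) {
               this.ts = ts;
           }

           Comparable<?> getOrderingValue() {
               if (cachedOrderingValue == null) {
                   // Real code would derive this from the schema and ordering fields.
                   cachedOrderingValue = ts;
               }
               return cachedOrderingValue;
           }
       }

       public static void main(String[] args) {
           Record r = new Record(42L);
           System.out.println(r.getOrderingValue());                         // 42
           System.out.println(r.getOrderingValue() == r.getOrderingValue()); // true: same cached instance
       }
   }
   ```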



##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/index/HoodieIndexUtils.java:
##########
@@ -536,6 +536,8 @@ public static <R> HoodieData<HoodieRecord<R>> mergeForPartitionUpdatesAndDeletio
     // define the buffered record merger.
    TypedProperties properties = readerContext.getMergeProps(updatedConfig.getProps());
     RecordContext<R> incomingRecordContext = readerContext.getRecordContext();
+
+

Review Comment:
   nitpick: remove the spacing changes in this file



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to