cshuo commented on code in PR #13225:
URL: https://github.com/apache/hudi/pull/13225#discussion_r2061317968
##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/table/format/FlinkRowDataReaderContext.java:
##########
@@ -148,35 +149,26 @@ public String getRecordKey(RowData record, Schema schema) {
   }

   @Override
-  public HoodieRecord<RowData> constructHoodieRecord(Option<RowData> recordOption, Map<String, Object> metadataMap) {
-    HoodieKey hoodieKey = new HoodieKey(
-        (String) metadataMap.get(INTERNAL_META_RECORD_KEY),
-        (String) metadataMap.get(INTERNAL_META_PARTITION_PATH));
-    RowData rowData = recordOption.get();
+  public HoodieRecord<RowData> constructHoodieRecord(BufferedRecord<RowData> bufferedRecord) {
+    HoodieKey hoodieKey = new HoodieKey(bufferedRecord.getRecordKey(), null);
     // delete record
-    if (recordOption.isEmpty()) {
-      Comparable orderingValue;
-      if (metadataMap.containsKey(INTERNAL_META_ORDERING_FIELD)) {
-        orderingValue = (Comparable) metadataMap.get(INTERNAL_META_ORDERING_FIELD);
-      } else {
-        throw new HoodieException("There should be ordering value in metadataMap.");
-      }
+    if (bufferedRecord.isDelete()) {
+      Comparable orderingValue = bufferedRecord.getOrderingValue();
       return new HoodieEmptyRecord<>(hoodieKey, HoodieOperation.DELETE, orderingValue, HoodieRecord.HoodieRecordType.FLINK);
     }
-    return new HoodieFlinkRecord(hoodieKey, rowData);
+    return new HoodieFlinkRecord(hoodieKey, bufferedRecord.getRecord());
Review Comment:
OK, we can do it now that there is an ordering value in `BufferedRecord`.
##########
hudi-client/hudi-spark-client/src/main/scala/org/apache/hudi/BaseSparkInternalRowReaderContext.java:
##########
@@ -112,6 +112,15 @@ public InternalRow seal(InternalRow internalRow) {
     return internalRow.copy();
   }

+  @Override
+  public InternalRow toBinaryRow(Schema schema, InternalRow internalRow) {
+    if (internalRow instanceof UnsafeRow) {
+      return internalRow;
+    }
+    final UnsafeProjection unsafeProjection = HoodieInternalRowUtils.getCachedUnsafeProjection(schema);
Review Comment:
Thanks for the reminder. `schema` here is decoded from
`HoodieReaderContext#decodeAvroSchema`, which is already interned by
`AvroSchemaCache#intern`, introduced in #12949.
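For readers unfamiliar with the interning trick referenced above, a minimal standalone sketch may help (a hypothetical `InternCache`, not Hudi's actual `AvroSchemaCache`): interning maps equal schemas onto one canonical instance, so downstream caches such as the projection cache can safely key on that instance.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for an intern cache like AvroSchemaCache: maps an
// expensive-to-compare key (e.g. an Avro schema) to a single canonical
// instance, so later caches can rely on one instance per logical schema.
public class InternCache<T> {
  private final Map<T, T> canonical = new ConcurrentHashMap<>();

  /** Returns the canonical instance for {@code value}, registering it if absent. */
  public T intern(T value) {
    T existing = canonical.putIfAbsent(value, value);
    return existing == null ? value : existing;
  }

  public static void main(String[] args) {
    InternCache<String> cache = new InternCache<>();
    String a = cache.intern(new String("schema-v1"));
    String b = cache.intern(new String("schema-v1"));
    // Equal inputs resolve to the same canonical instance.
    System.out.println(a == b); // prints "true"
  }
}
```

This is only the shape of the pattern; the real cache additionally has to consider schema fingerprints and memory retention.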
##########
hudi-client/hudi-flink-client/src/main/java/org/apache/hudi/client/model/HoodieFlinkRecord.java:
##########
@@ -170,11 +166,11 @@ public HoodieRecord prependMetaFields(Schema recordSchema, Schema targetSchema,
   @Override
   public HoodieRecord updateMetaField(Schema recordSchema, int ordinal, String value) {
-    ValidationUtils.checkArgument(recordSchema.getField(RECORD_KEY_METADATA_FIELD) != null,
-        "The record is expected to contain metadata fields.");
-    GenericRowData rowData = (GenericRowData) getData();
-    rowData.setField(ordinal, StringData.fromString(value));
-    return this;
+    String[] metaVals = new String[HoodieRecord.HOODIE_META_COLUMNS.size()];
+    metaVals[ordinal] = value;
+    boolean withOperation = recordSchema.getField(OPERATION_METADATA_FIELD) != null;
+    RowData rowData = new HoodieRowDataWithPartialMetaFields(metaVals, Collections.singleton(ordinal), getData(), withOperation);
Review Comment:
Yes, all meta fields will be kept, but only `FILENAME_METADATA_FIELD` is
updated per record during compaction.
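The overlay idea behind `HoodieRowDataWithPartialMetaFields` can be sketched in isolation (a hypothetical `OverlayRow`, not the actual Flink `RowData` implementation): a view that serves a few overridden ordinals from a sparse array and delegates the rest to the wrapped row, avoiding a full per-record copy.

```java
import java.util.Collections;
import java.util.Set;
import java.util.function.IntFunction;

// Hypothetical sketch of a partial-overlay row: overridden ordinals are read
// from a sparse array, everything else is delegated to the underlying row.
public class OverlayRow {
  private final String[] overrides;       // sparse; only overridden slots are populated
  private final Set<Integer> overridden;  // ordinals owned by the overlay
  private final IntFunction<String> base; // field lookup on the wrapped row

  public OverlayRow(String[] overrides, Set<Integer> overridden, IntFunction<String> base) {
    this.overrides = overrides;
    this.overridden = overridden;
    this.base = base;
  }

  public String getField(int ordinal) {
    return overridden.contains(ordinal) ? overrides[ordinal] : base.apply(ordinal);
  }

  public static void main(String[] args) {
    String[] metaVals = new String[5];
    metaVals[4] = "new-file-name"; // e.g. rewriting only the filename meta field
    OverlayRow row = new OverlayRow(metaVals, Collections.singleton(4), i -> "base-" + i);
    System.out.println(row.getField(0)); // prints "base-0"
    System.out.println(row.getField(4)); // prints "new-file-name"
  }
}
```

The design choice is the usual copy-vs-view trade-off: a view costs one extra indirection per field read but avoids materializing a new row for every compacted record.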
##########
hudi-common/src/main/java/org/apache/hudi/common/model/HoodieRecord.java:
##########
@@ -211,7 +213,14 @@ public HoodieOperation getOperation() {
     return operation;
   }

-  public abstract Comparable<?> getOrderingValue(Schema recordSchema, Properties props);
+  public Comparable<?> getOrderingValue(Schema recordSchema, Properties props) {
+    if (orderingValue == null) {
+      orderingValue = doGetOrderingValue(recordSchema, props);
+    }
+    return orderingValue;
+  }
+
+  protected abstract Comparable<?> doGetOrderingValue(Schema recordSchema, Properties props);
Review Comment:
Will add some docs.
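As a doc aid, the lazy-caching template method in the diff above reduces to this standalone sketch (class names here are illustrative only, not the Hudi API):

```java
// Illustrative sketch of the memoizing template method: the public getter
// computes the ordering value at most once and caches it; subclasses supply
// the (possibly expensive) extraction logic.
abstract class LazyOrderingRecord {
  private Comparable<?> orderingValue; // cached after first computation

  public Comparable<?> getOrderingValue() {
    if (orderingValue == null) {
      orderingValue = doGetOrderingValue();
    }
    return orderingValue;
  }

  /** Subclass hook: extract the ordering value from the payload. */
  protected abstract Comparable<?> doGetOrderingValue();
}
```

One caveat worth covering in the docs: a legitimately `null` ordering value would be recomputed on every call, since `null` doubles as the "not yet computed" sentinel.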
##########
hudi-common/src/main/java/org/apache/hudi/avro/HoodieAvroReaderContext.java:
##########
@@ -160,24 +161,27 @@ public String getRecordKey(IndexedRecord record, Schema schema) {
   }

   @Override
-  public HoodieRecord constructHoodieRecord(
-      Option<IndexedRecord> recordOpt,
-      Map<String, Object> metadataMap) {
-    if (!recordOpt.isPresent()) {
+  public HoodieRecord<IndexedRecord> constructHoodieRecord(BufferedRecord<IndexedRecord> bufferedRecord) {
+    if (bufferedRecord.isDelete()) {
       return SpillableMapUtils.generateEmptyPayload(
-          (String) metadataMap.get(INTERNAL_META_RECORD_KEY),
-          (String) metadataMap.get(INTERNAL_META_PARTITION_PATH),
-          (Comparable<?>) metadataMap.get(INTERNAL_META_ORDERING_FIELD),
+          bufferedRecord.getRecordKey(),
+          null,
Review Comment:
See more context here:
https://github.com/apache/hudi/pull/13213#discussion_r2056133806
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/common/model/HoodieSparkRecord.java:
##########
@@ -89,6 +89,8 @@ public class HoodieSparkRecord extends HoodieRecord<InternalRow> {
    */
   private final transient StructType schema;

+  private transient Comparable<?> orderingValue;
Review Comment:
Yes, this field was added by mistake during rebasing and conflict
resolution; will remove it.
##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/table/format/FlinkRowDataReaderContext.java:
##########
@@ -190,6 +182,18 @@ public RowData seal(RowData rowData) {
     return rowDataSerializer.copy(rowData);
   }

+  @Override
+  public RowData toBinaryRow(Schema avroSchema, RowData record) {
+    if (record instanceof BinaryRowData) {
+      return record;
+    }
+    if (rowDataSerializer == null) {
+      RowType requiredRowType = (RowType) RowDataAvroQueryContexts.fromAvroSchema(getSchemaHandler().getRequiredSchema()).getRowType().getLogicalType();
Review Comment:
Yes, the per-record schema should be used.
##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/BufferedRecord.java:
##########
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.common.table.read;
+
+import org.apache.hudi.common.engine.HoodieReaderContext;
+import org.apache.hudi.common.model.DeleteRecord;
+import org.apache.hudi.common.model.HoodieKey;
+import org.apache.hudi.common.model.HoodieRecord;
+import org.apache.hudi.common.util.Option;
+import org.apache.hudi.exception.HoodieException;
+
+import org.apache.avro.Schema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.Properties;
+
+import static org.apache.hudi.common.model.HoodieRecord.DEFAULT_ORDERING_VALUE;
+
+/**
+ * Buffered Record used by file group reader.
+ *
+ * @param <T> The type of the engine specific row.
+ */
+public class BufferedRecord<T> implements Serializable {
Review Comment:
`BufferedRecord` is introduced for merging and caching purposes in the file
group reader:
1) There are APIs on `BufferedRecord` that differ from `HoodieRecord`;
these changes are specific to the fg reader and may not be general enough
for `HoodieRecord`, e.g., `getSchemaId`, `toBinary`.
2) `BufferedRecord` will be cached and serialized into
`ExternalSpillableMap`, and it's known that the perf bottleneck of compaction
lies in the spilling of `ExternalSpillableMap`, so we try to keep the cached
record simple and compact to save space and reduce spilling. We've run a
microbenchmark for the PR, and 50%+ of the memory for `ExternalSpillableMap`
is saved.
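To make the space argument concrete, here is a stripped-down sketch of the idea (hypothetical field set; see the actual `BufferedRecord` in this PR for the real layout): only what merging needs is kept, and the schema is referenced by an integer id into a shared table instead of being embedded per record.

```java
import java.io.Serializable;

// Hypothetical compact buffered record: key, ordering value, schema id,
// delete flag, and payload -- nothing else travels to the spillable map.
public class CompactBufferedRecord<T> implements Serializable {
  private final String recordKey;
  private final Comparable<?> orderingValue;
  private final int schemaId;    // index into a shared schema table, not the schema itself
  private final boolean delete;  // delete markers carry no payload
  private final T record;        // null when delete == true

  public CompactBufferedRecord(String recordKey, Comparable<?> orderingValue,
                               int schemaId, boolean delete, T record) {
    this.recordKey = recordKey;
    this.orderingValue = orderingValue;
    this.schemaId = schemaId;
    this.delete = delete;
    this.record = record;
  }

  public String getRecordKey() { return recordKey; }
  public Comparable<?> getOrderingValue() { return orderingValue; }
  public int getSchemaId() { return schemaId; }
  public boolean isDelete() { return delete; }
  public T getRecord() { return record; }
}
```

Compared with serializing a full `HoodieRecord` (key, partition path, operation, payload, plus per-record metadata maps), a flat record like this shrinks what each spill entry has to encode.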
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]