geserdugarov commented on code in PR #12516:
URL: https://github.com/apache/hudi/pull/12516#discussion_r1891906063
##########
hudi-client/hudi-flink-client/src/main/java/org/apache/hudi/io/storage/row/HoodieRowDataCreateHandle.java:
##########
@@ -90,6 +91,7 @@ public HoodieRowDataCreateHandle(HoodieTable table,
HoodieWriteConfig writeConfi
this.fileId = fileId;
this.newRecordLocation = new HoodieRecordLocation(instantTime, fileId);
this.preserveHoodieMetadata = preserveHoodieMetadata;
+ this.skipMetadataWrite = skipMetadataWrite;
Review Comment:
The user sets the `hoodie.populate.meta.fields` option, which is `true` by
default. Its description mentions "append only/immutable data" as a use case:
https://github.com/apache/hudi/blob/9da3221a79465f3326ae3ac206b08d60864ddcaa/hudi-common/src/main/java/org/apache/hudi/common/table/HoodieTableConfig.java#L261-L265
For this reason, this PR supports `hoodie.populate.meta.fields` in
Flink only for append mode.
For a quick check, I use SQL queries like the following, which set up
append mode:
```SQL
CREATE TABLE hudi_debug (
id INT,
part INT,
desc STRING,
PRIMARY KEY (id) NOT ENFORCED
)
WITH (
'connector' = 'hudi',
'path' = '...',
'table.type' = 'COPY_ON_WRITE',
'write.operation' = 'insert',
'hoodie.populate.meta.fields' = 'false'
);
```
```SQL
INSERT INTO hudi_debug VALUES
(1,100,'aaa'),
(2,200,'bbb');
```
**Expected results**: there are no exceptions during
```SQL
SELECT * FROM hudi_debug;
```
and the corresponding parquet files in HDFS don't contain metadata columns.
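As a complementary sanity check (a sketch, not part of the PR): given the column names read from a written parquet file's schema (e.g. via pyarrow or `parquet-tools schema`), a small helper can flag any of the five standard Hudi meta fields that should be absent when `hoodie.populate.meta.fields` is `false`:

```python
# Sketch: detect Hudi metadata columns in a parquet schema's column list.
# The five field names below are the standard Hudi meta fields; how you
# obtain `columns` (pyarrow, parquet-tools, etc.) is up to your setup.
HOODIE_META_FIELDS = {
    "_hoodie_commit_time",
    "_hoodie_commit_seqno",
    "_hoodie_record_key",
    "_hoodie_partition_path",
    "_hoodie_file_name",
}

def meta_columns(columns):
    """Return the Hudi meta fields present among the given column names."""
    return sorted(HOODIE_META_FIELDS.intersection(columns))
```

For the table above, `meta_columns(["id", "part", "desc"])` should return an empty list if metadata writing was really skipped.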