biao-lvwan commented on issue #11862:
URL: https://github.com/apache/hudi/issues/11862#issuecomment-2323378976
> You declared the table as COW; COW does not generate log files, so compaction is needless.
>
> ```java
> 'table.type' = 'COPY_ON_WRITE'
> ```
Sorry, the screenshot was wrong. Here is the actual DDL:
```sql
CREATE TABLE test_hudi_flink9 (
  id int PRIMARY KEY NOT ENFORCED,
  name VARCHAR(10),
  price int,
  ts int,
  dt VARCHAR(10)
)
PARTITIONED BY (dt)
WITH (
  'connector' = 'hudi',
  'path' = 's3a://ceshi/hudi9/',
  'table.type' = 'MERGE_ON_READ',
  'hoodie.datasource.write.keygenerator.class' = 'org.apache.hudi.keygen.ComplexAvroKeyGenerator',
  'hoodie.datasource.write.recordkey.field' = 'id',
  'hoodie.datasource.write.hive_style_partitioning' = 'true',
  'changelog.enabled' = 'true',
  'compaction.async.enabled' = 'true',
  'compaction.delta_commits' = '2',
  'compaction.trigger.strategy' = 'num_commits',
  'hive_sync.enable' = 'true',
  'hive_sync.table' = 't_hdm',
  'hive_sync.db' = 'default',
  'hive_sync.mode' = 'hms',
  'hive_sync.metastore.uris' = 'thrift://hive-metastore:9083'
);
```
I am using MERGE_ON_READ. Whether the storage is S3 or MinIO, the log files are never automatically compacted and merged into base files. What might be the cause?
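For context, here is a minimal sketch (illustrative pseudologic, not Hudi's actual implementation) of what the `num_commits` trigger strategy checks: with `'compaction.delta_commits' = '2'`, a compaction plan should be scheduled once two delta commits have completed since the last compaction on the timeline. The function name and timeline representation below are my own, for illustration only.

```python
def should_schedule_compaction(timeline, delta_commits_threshold=2):
    """Decide whether the 'num_commits' trigger would fire.

    timeline: ordered list of completed instant actions, oldest first.
    On a MERGE_ON_READ table, a finished compaction shows up as a
    'commit' instant, while regular writes are 'deltacommit' instants.
    """
    deltas_since_compaction = 0
    for action in timeline:
        if action == "commit":         # completed compaction resets the count
            deltas_since_compaction = 0
        elif action == "deltacommit":  # completed delta commit
            deltas_since_compaction += 1
    return deltas_since_compaction >= delta_commits_threshold

# With 'compaction.delta_commits' = '2', two completed delta commits
# after the last compaction are enough to schedule a new one:
print(should_schedule_compaction(["deltacommit", "deltacommit"]))            # True
print(should_schedule_compaction(["deltacommit", "commit", "deltacommit"]))  # False
```

So if the `.hoodie` timeline under `s3a://ceshi/hudi9/` shows two or more completed `.deltacommit` instants but no `commit` or `compaction` instants at all, compaction is never even being scheduled, which narrows down where to look.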

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]