Vsevolod3 opened a new issue, #9201:
URL: https://github.com/apache/hudi/issues/9201

   I am running a Flink (1.15.2) job on EMR (emr-6.9.0) that reads records from 
Kafka and writes them to S3 with Hudi (0.13.0). The table type is MERGE_ON_READ, 
and compaction is configured with COMPACTION_ASYNC_ENABLED = true and 
COMPACTION_TRIGGER_STRATEGY = time_elapsed, yet async compaction never runs.
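
   For context, here is a minimal Flink SQL sketch of the sink table this setup 
implies. The schema is hypothetical; `acct_id`, `update_ts`, the S3 path, and the 
compaction options are taken from the job's actual properties listed further below.
   
   ```sql
   CREATE TABLE account_hudi (
     acct_id STRING,
     balance DECIMAL(18, 2),  -- hypothetical payload column
     update_ts TIMESTAMP(3),
     PRIMARY KEY (acct_id) NOT ENFORCED
   ) WITH (
     'connector' = 'hudi',
     'path' = 's3://my_bucket/my_path/account/',
     'table.type' = 'MERGE_ON_READ',
     'precombine.field' = 'update_ts',
     -- schedule compaction asynchronously, triggered purely by elapsed
     -- time since the last compaction rather than by delta commit count
     'compaction.schedule.enabled' = 'true',
     'compaction.async.enabled' = 'true',
     'compaction.trigger.strategy' = 'time_elapsed',
     'compaction.delta_seconds' = '300'
   );
   ```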
   
   ## To Reproduce
   
   Steps to reproduce the behavior:
   
   1. Submit the Flink job to the EMR cluster (with COMPACTION_ASYNC_ENABLED = true, 
COMPACTION_TRIGGER_STRATEGY = time_elapsed, and COMPACTION_DELTA_SECONDS = 600).
   2. Load data, not exceeding 3 commits per file ID (see the pipeline sketch 
after this list).
   3. Wait for more than 600 seconds.
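
   For step 2, loading data would look roughly like the following. The Kafka 
topic, broker address, format, and source schema are all hypothetical; only the 
Hudi sink side is constrained by the properties listed below.
   
   ```sql
   -- hypothetical Kafka source feeding the Hudi sink sketched above
   CREATE TABLE account_kafka (
     acct_id STRING,
     balance DECIMAL(18, 2),
     update_ts TIMESTAMP(3)
   ) WITH (
     'connector' = 'kafka',
     'topic' = 'accounts',                            -- hypothetical topic
     'properties.bootstrap.servers' = 'broker:9092',  -- hypothetical broker
     'scan.startup.mode' = 'earliest-offset',
     'format' = 'json'
   );
   
   -- continuous streaming insert; each Flink checkpoint produces a delta commit
   INSERT INTO account_hudi
   SELECT acct_id, balance, update_ts FROM account_kafka;
   ```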
   
   ### Full list of Hudi properties for reference
   ```sql
     'index.type' = 'FLINK_STATE',
     'compaction.schedule.enabled' = 'true',
     'hoodie.index.bucket.engine' = 'SIMPLE',
     'clustering.plan.strategy.sort.columns' = 'acct_id',
     'write.bucket_assign.tasks' = '3',
     'compaction.delta_seconds' = '300',
     'clustering.delta_commits' = '4',
     'clustering.plan.strategy.small.file.limit' = '600',
     'compaction.async.enabled' = 'true',
     'compaction.max_memory' = '1024',
     'hoodie.parquet.max.file.size' = '125829120',
     'read.streaming.enabled' = 'false',
     'path' = 's3://my_bucket/my_path/account/',
     'hoodie.logfile.max.size' = '1073741824',
     'hoodie.datasource.write.hive_style_partitioning' = 'true',
     'hoodie.parquet.compression.ratio' = '0.1',
     'hoodie.parquet.small.file.limit' = '104857600',
     'hoodie.bucket.index.hash.field' = 'acct_id',
     'compaction.tasks' = '3',
     'precombine.field' = 'update_ts',
     'write.task.max.size' = '4094',
     'hoodie.parquet.compression.codec' = 'snappy',
     'compaction.delta_commits' = '3',
     'clustering.tasks' = '3',
     'compaction.trigger.strategy' = 'time_elapsed',
     'hoodie.bucket.index.num.buckets' = '256',
     'read.tasks' = '3',
     'compaction.timeout.seconds' = '1200',
     'clustering.async.enabled' = 'true',
     'table.type' = 'MERGE_ON_READ',
     'metadata.compaction.delta_commits' = '10',
     'clustering.plan.strategy.max.num.groups' = '30',
     'write.tasks' = '3',
     'clustering.schedule.enabled' = 'false',
     'hoodie.logfile.data.block.format' = 'avro',
     'write.batch.size' = '4094.0',
     'write.sort.memory' = '4094'
   ```
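
   Note that with 'compaction.trigger.strategy' = 'time_elapsed', scheduling is 
driven by 'compaction.delta_seconds' alone, so the 'compaction.delta_commits' = '3' 
entry should have no effect. If commit count is meant to matter as well, Hudi also 
accepts combined strategies, e.g. (a hypothetical variant, not the job's current 
configuration):
   
   ```sql
     -- schedule compaction when EITHER 3 delta commits have accumulated
     -- OR 300 seconds have elapsed since the last compaction
     'compaction.trigger.strategy' = 'num_or_time',
     'compaction.delta_commits' = '3',
     'compaction.delta_seconds' = '300'
   ```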
   
   ## Expected behavior
   
   Compaction should run roughly 5 minutes after the job tasks are fully started; 
that is, a `*.compaction.requested` instant should appear on the table's timeline 
under `.hoodie/` and then be executed. Instead, compaction is never triggered.
   
   ## Environment Description
   
   * Hudi version : 0.13.0
   * Spark version : N/A (using Flink 1.15.2)
   * Hive version : tbd
   * Hadoop version : 3.3.3 (bundled with emr-6.9.0)
   * Storage (HDFS/S3/GCS..) : S3
   * Running on Docker? (yes/no) : no
   
   
   **Stacktrace**
   
   No errors or stack traces appear in the Flink logs; the job keeps running 
normally, but no compaction is ever scheduled.
   
   

