itdom commented on issue #8984:
URL: https://github.com/apache/hudi/issues/8984#issuecomment-2246793962

   > @ad1happy2go
   > 
   > Compactions fail with
   > 
   > java.lang.IllegalArgumentException: Earliest write inflight instant time must be later than compaction time. Earliest :[==>20230620080309158__deltacommit__INFLIGHT], Compaction scheduled at 20230620080355689
   > 
   > 2023-06-20 08:03:55,711 INFO s3n.S3NativeFileSystem: Opening 's3://a206760-novusdoc-s3-dev-use1/novusdoc/.hoodie/hoodie.properties' for reading
   > 2023-06-20 08:03:55,741 INFO table.HoodieTableMetaClient: Finished Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=PARQUET) from s3://a206760-novusdoc-s3-dev-use1/novusdoc
   > 2023-06-20 08:03:55,741 INFO table.HoodieTableMetaClient: Loading Active commit timeline for s3://a206760-novusdoc-s3-dev-use1/novusdoc
   > 
   > I deleted 20230620080309158.deltacommit.inflight and 20230620080309158.deltacommit.requested, and the compaction then worked. But I can't do this in production; we receive upserts every second through a stream. Please help.
   > 
   > spark-submit \
   >   --packages org.apache.hudi:hudi-utilities-bundle_2.12:0.11.1,org.apache.spark:spark-avro_2.11:2.4.4,org.apache.hudi:hudi-spark3-bundle_2.12:0.11.1 \
   >   --verbose --driver-memory 2g --executor-memory 2g \
   >   --class org.apache.hudi.utilities.HoodieCompactor \
   >   /usr/lib/hudi/hudi-utilities-bundle.jar,/usr/lib/hudi/hudi-spark-bundle.jar \
   >   --table-name novusdoc --base-path s3://a206760-novusdoc-s3-dev-use1/novusdoc \
   >   --mode scheduleandexecute --spark-memory 2g \
   >   --hoodie-conf hoodie.metadata.enable=false \
   >   --hoodie-conf hoodie.compact.inline.trigger.strategy=NUM_COMMITS \
   >   --hoodie-conf hoodie.compact.inline.max.delta.commits=50
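   For context, the failing check above can be sketched outside Hudi. This is a minimal illustration, not Hudi's actual code: the function name is hypothetical, and the two timestamps are copied from the error message. Hudi instant times are fixed-width yyyyMMddHHmmssSSS strings, so plain string comparison matches chronological order.

   ```python
   # Instant times taken from the error message above.
   earliest_inflight = "20230620080309158"   # the INFLIGHT delta commit
   compaction_instant = "20230620080355689"  # the compaction being scheduled

   def can_schedule_compaction(compaction_time: str, earliest_inflight_time: str) -> bool:
       """Hypothetical sketch: compaction may only be scheduled if every
       inflight write instant is strictly later than the compaction instant.
       Fixed-width timestamp strings compare correctly lexicographically."""
       return earliest_inflight_time > compaction_time

   # The inflight delta commit (08:03:09) predates the compaction instant
   # (08:03:55), so the check fails and Hudi raises IllegalArgumentException.
   print(can_schedule_compaction(compaction_instant, earliest_inflight))
   ```

   This also suggests why deleting the two inflight/requested files "worked": with no inflight instant earlier than the compaction time, the check passes, but it discards an in-progress write and is unsafe while a stream is continuously upserting.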
   
   Has your problem been resolved?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
