raviMoengage commented on issue #5565:
URL: https://github.com/apache/hudi/issues/5565#issuecomment-1126034047

   Hi @nsivabalan,
   
   Yes, I am using `foreachBatch` to write in Hudi format, same as above. True that with spark datasource writes, async table services won't kick in.
   
   As per 
[FAQ](https://hudi.apache.org/learn/faq/#what-options-do-i-have-for-asynchronousoffline-compactions-on-mor-dataset)
   ```
   Alternately, from 0.11.0, to avoid dependency on lock providers, scheduling 
alone can be done inline by regular writer using the config 
hoodie.compact.schedule.inline. 
   And compaction execution can be done offline by periodically triggering the 
Hudi Compactor Utility or Hudi CLI.
   ```
   I tried setting `hoodie.compact.schedule.inline = true`, but no compaction is getting scheduled.
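   For reference, here is a minimal sketch of how I'm wiring this up (table name, key fields, and paths are placeholders, not my actual job):

   ```python
   # Sketch of a foreachBatch Hudi write with inline compaction *scheduling*
   # enabled (0.11.0+). Per the FAQ, execution is expected to happen offline
   # via the Hudi Compactor Utility or CLI; only scheduling is done inline.

   hudi_options = {
       "hoodie.table.name": "my_table",                        # placeholder
       "hoodie.datasource.write.table.type": "MERGE_ON_READ",
       "hoodie.datasource.write.recordkey.field": "uuid",      # placeholder
       "hoodie.datasource.write.precombine.field": "ts",       # placeholder
       # Schedule compaction inline, avoiding a lock provider dependency:
       "hoodie.compact.schedule.inline": "true",
       # Inline *execution* stays off; a separate compactor job would run it:
       "hoodie.compact.inline": "false",
   }

   def write_batch(batch_df, batch_id):
       # Called by Structured Streaming for each micro-batch.
       (batch_df.write
           .format("hudi")
           .options(**hudi_options)
           .mode("append")
           .save("/tmp/hudi/my_table"))  # placeholder path

   # Attached to the stream with:
   # query = stream_df.writeStream.foreachBatch(write_batch).start()
   ```

   With exactly this setup, I still see no `compaction.requested` instant appearing on the timeline.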


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
