koochiswathiTR opened a new issue, #8939:
URL: https://github.com/apache/hudi/issues/8939

   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
   - Join the mailing list to engage in conversations and get faster support at 
[email protected].
   
   - If you have triaged this as a bug, then file an 
[issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   Hi,
   We use Spark Streaming to ingest data, with inline compaction and cleaning based on the number of commits. As the data in Hudi grows, our ingestion job spends more and more time on cleanup and compaction, so we have decided to run compaction and cleanup on a separate EMR cluster and disable compaction in the main job.
   
   Please suggest how to disable compaction and cleanup so they stop running in the main job.
   The writer configuration we currently use is:
   
       HoodieCompactionConfig.INLINE_COMPACT_TRIGGER_STRATEGY.key() -> CompactionTriggerStrategy.TIME_ELAPSED.name,
       HoodieCompactionConfig.INLINE_COMPACT_TIME_DELTA_SECONDS.key() -> String.valueOf(60 * 60),
       HoodieCompactionConfig.CLEANER_POLICY.key() -> HoodieCleaningPolicy.KEEP_LATEST_COMMITS.name(),
       HoodieCompactionConfig.CLEANER_COMMITS_RETAINED.key() -> "50",
       HoodieCompactionConfig.MIN_COMMITS_TO_KEEP.key() -> "51",
       HoodieCompactionConfig.MAX_COMMITS_TO_KEEP.key() -> "52",
       HoodieCompactionConfig.ASYNC_CLEAN.key() -> "false",
       HoodieCompactionConfig.INLINE_COMPACT.key() -> "true",
       HoodieMetricsConfig.TURN_METRICS_ON.key() -> "true",
       HoodieMetricsConfig.METRICS_REPORTER_TYPE_VALUE.key() -> MetricsReporterType.DATADOG.name(),
       HoodieMetricsDatadogConfig.API_SITE_VALUE.key() -> "US",
       HoodieMetricsDatadogConfig.METRIC_PREFIX_VALUE.key() -> "tacticalnovusingest.hudi",
       HoodieMetricsDatadogConfig.API_KEY_SUPPLIER.key() -> "c%%%",
       HoodieMetadataConfig.ENABLE.key() -> "false",
       HoodieWriteConfig.ROLLBACK_USING_MARKERS_ENABLE.key() -> "false",
   
   From what I have read, setting compaction.schedule.enable to false should stop compaction, but I could not find any Java setter or getter for this property.
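   In case it helps frame the question, here is a minimal sketch of how the main writer could turn both services off by passing raw string keys (which sidesteps the missing Java constant). The key names are my assumptions from the Hudi 0.11.x configuration reference, and `hoodie.compact.schedule.inline` in particular is my guess at the property behind compaction.schedule.enable; please verify them against your version before relying on this.
   
   ```scala
   // Sketch only: extra writer options for the ingestion job, passed as raw
   // string keys rather than through config constants.
   // Key names are assumptions from the Hudi 0.11.x configuration reference.
   val ingestOnlyOpts = Map(
     "hoodie.compact.inline" -> "false",         // stop inline compaction in this job
     "hoodie.compact.schedule.inline" -> "true", // assumption: keep scheduling compaction plans so a separate job can execute them
     "hoodie.clean.automatic" -> "false"         // stop auto clean; a separate job would clean
   )
   ```
   
   These could be merged into the existing options map alongside the metrics settings.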
   I am also planning to flip the settings below in my writer.
   Please review:
   
       HoodieCompactionConfig.AUTO_CLEAN.key() -> "false",
       HoodieCompactionConfig.ASYNC_CLEAN.key() -> "true",
       HoodieMetricsConfig.TURN_METRICS_ON.key() -> "true",
       HoodieMetricsConfig.METRICS_REPORTER_TYPE_VALUE.key() -> MetricsReporterType.DATADOG.name(),
       HoodieMetricsDatadogConfig.API_SITE_VALUE.key() -> "US",
       HoodieMetricsDatadogConfig.METRIC_PREFIX_VALUE.key() -> "tacticalnovusingest.hudi",
       HoodieMetricsDatadogConfig.API_KEY_SUPPLIER.key() -> "%%%%%%",
       HoodieMetadataConfig.ENABLE.key() -> "false",
       HoodieWriteConfig.ROLLBACK_USING_MARKERS_ENABLE.key() -> "false",
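   For completeness, a hedged sketch of the counterpart options for the separate EMR job that would run only compaction and cleaning, mirroring the trigger strategy and retention from the main job above. Again, the raw string keys are assumptions taken from the Hudi 0.11.x configuration reference, not a tested setup.
   
   ```scala
   // Sketch only: options for a standalone compaction/clean job on the other
   // EMR cluster. Key names are assumptions from the Hudi 0.11.x config docs.
   val maintenanceOpts = Map(
     "hoodie.compact.inline" -> "true",                   // run compaction here instead of in the ingestion job
     "hoodie.compact.inline.trigger.strategy" -> "TIME_ELAPSED",
     "hoodie.compact.inline.max.delta.seconds" -> "3600", // matches the 60 * 60 used in the main job
     "hoodie.clean.automatic" -> "true",                  // move cleaning to this job
     "hoodie.cleaner.policy" -> "KEEP_LATEST_COMMITS",
     "hoodie.cleaner.commits.retained" -> "50"
   )
   ```
   
   The idea is that the ingestion job only writes (and possibly schedules compaction plans), while this job executes compaction and cleaning on its own cadence.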
   
   Please help.
   
   
   **Environment Description**
   
   * Hudi version : 0.11.1
   
   * Spark version : 3.12
   
   * Hive version : NA
   
   * Hadoop version : NA
   
   * Storage (HDFS/S3/GCS..) : S3
   
   * Running on Docker? (yes/no) : NO
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
