abhisheksahani91 opened a new issue, #10270:
URL: https://github.com/apache/hudi/issues/10270

   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
   - Join the mailing list to engage in conversations and get faster support at 
[email protected].
   
   - If you have triaged this as a bug, then file an 
[issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   Hi, we are maintaining a non-partitioned Hudi MOR table, since it holds user data that receives random updates. In our use case, roughly 95% of the ingested records are updates.
   
   We are using async compaction with the NUM_OR_TIME compaction trigger strategy.
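   
   For quick reference, these are the compaction-related flags from the full Glue script in the "To Reproduce" section below, grouped together. The keys and values are the same as in the script; the bare `compaction.*` keys look like Flink-style option names and may not take effect in the Spark DeltaStreamer.
   
   ```scala
   // Compaction settings as passed to HoodieDeltaStreamer in the script below.
   val compactionConfs: Array[String] = Array(
     "--hoodie-conf", "hoodie.compact.inline.trigger.strategy=NUM_OR_TIME",
     "--hoodie-conf", "hoodie.compact.inline.max.delta.commits=5",
     "--hoodie-conf", "hoodie.compact.inline.max.delta.seconds=600",
     // Flink-style keys also present in the script; possibly no-ops for the Spark writer.
     "--hoodie-conf", "compaction.trigger.strategy=NUM_OR_TIME",
     "--hoodie-conf", "compaction.schedule.enabled=true",
     "--hoodie-conf", "compaction.async.enabled=true",
     "--hoodie-conf", "compaction.delta_commits=5",
     "--hoodie-conf", "compaction.delta_seconds=600"
   )
   ```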
   
   
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. First, we copy the existing data (83 million records) from MongoDB using the Kafka Mongo source connector.
   2. Once the initial sync is completed, the connector starts publishing the updates from the change stream at the rate of 
   3. After every 5 delta commits, compaction is triggered, but it takes a long time, for example 20 minutes.
   
   4. Script used for running the Hudi Glue job:
   ```scala
   import com.amazonaws.services.glue.GlueContext
   import com.amazonaws.services.glue.MappingSpec
   import com.amazonaws.services.glue.errors.CallSite
   import com.amazonaws.services.glue.util.GlueArgParser
   import com.amazonaws.services.glue.util.Job
   import com.amazonaws.services.glue.util.JsonOptions
   import org.apache.spark.SparkContext
   import scala.collection.JavaConverters._
   import org.apache.spark.sql.SparkSession
   import org.apache.spark.api.java.JavaSparkContext
   import org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer
   import org.apache.hudi.utilities.deltastreamer.SchedulerConfGenerator
   import org.apache.hudi.utilities.UtilHelpers
   
   object GlueApp {
     def main(sysArgs: Array[String]): Unit = {
       val args = GlueArgParser.getResolvedOptions(
         sysArgs,
         Seq("JOB_NAME", "TARGET_BUCKET", "CONFIG_BUCKET", "KAFKA_BOOTSTRAP_SERVERS",
           "TARGET_TABLE", "SOURCE_TOPIC", "HOODIE_RECORDKEY_FIELD",
           "HOODIE_PRECOMBINE_FIELD", "PARTITION_FIELD", "TARGET_DATABASE").toArray)
   
       // DeltaStreamer arguments and Hudi configs
       val config = Array(
         "--schemaprovider-class", "org.apache.hudi.utilities.schema.FilebasedSchemaProvider",
         "--source-class", "org.apache.hudi.utilities.sources.JsonKafkaSource",
         "--source-ordering-field", "ts_ux",
         "--target-base-path", "s3://" + args("TARGET_BUCKET") + "/hudi_data_lake_prod5/" + args("TARGET_TABLE") + "/",
         "--target-table", args("TARGET_TABLE"),
         "--table-type", "MERGE_ON_READ",
         //"--table-type", "COPY_ON_WRITE",
         "--enable-hive-sync",
         "--hoodie-conf", "hoodie.deltastreamer.schemaprovider.source.schema.file=s3://schema/latest-mongodb-userdata-prod-schema.avsc",
         "--hoodie-conf", "hoodie.deltastreamer.schemaprovider.target.schema.file=s3://schema/latest-mongodb-userdata-prod-schema.avsc",
         "--hoodie-conf", "hoodie.deltastreamer.source.kafka.topic=" + args("SOURCE_TOPIC"),
         //"--hoodie-conf", "hoodie.datasource.hive_sync.table=" + args("TARGET_TABLE"),
         "--hoodie-conf", "hoodie.datasource.write.recordkey.field=" + args("HOODIE_RECORDKEY_FIELD"),
         "--hoodie-conf", "hoodie.datasource.write.precombine.field=" + args("HOODIE_PRECOMBINE_FIELD"),
         "--hoodie-conf", "hoodie.datasource.hive_sync.enable=true",
         "--hoodie-conf", "hoodie.datasource.hive_sync.database=" + args("TARGET_DATABASE"),
         "--hoodie-conf", "hoodie.datasource.hive_sync.table=" + args("TARGET_TABLE"),
         "--hoodie-conf", "hoodie.datasource.write.operation=UPSERT",
         "--hoodie-conf", "hoodie.datasource.hive_sync.use_jdbc=false",
         "--hoodie-conf", "hoodie.datasource.hive_sync.partition_extractor_class=org.apache.hudi.hive.NonPartitionedExtractor",
         "--hoodie-conf", "hoodie.datasource.write.keygenerator.class=org.apache.hudi.keygen.NonpartitionedKeyGenerator",
         // Kafka consumer settings
         "--hoodie-conf", "security.protocol=PLAINTEXT",
         "--hoodie-conf", "auto.offset.reset=latest",
         "--hoodie-conf", "bootstrap.servers=" + args("KAFKA_BOOTSTRAP_SERVERS"),
         "--hoodie-conf", "group.id=native-hudi-job",
         "--hoodie-conf", "hoodie.kafka.allow.commit.on.errors=true",
         "--hoodie-conf", "hoodie.write.allow_null_updates",
         // Index and parallelism settings
         "--hoodie-conf", "hoodie.index.type=SIMPLE",
         "--hoodie-conf", "hoodie.upsert.shuffle.parallelism=200",
         "--hoodie-conf", "hoodie.finalize.write.parallelism=400",
         "--hoodie-conf", "hoodie.markers.delete.parallelism=200",
         "--hoodie-conf", "hoodie.file.listing.parallelism=400",
         "--hoodie-conf", "hoodie.cleaner.parallelism=400",
         "--hoodie-conf", "hoodie.archive.delete.parallelism=200",
         // Compaction settings
         "--hoodie-conf", "compaction.trigger.strategy=NUM_OR_TIME",
         "--hoodie-conf", "hoodie.compact.inline.trigger.strategy=NUM_OR_TIME",
         "--hoodie-conf", "compaction.schedule.enabled=true",
         "--hoodie-conf", "compaction.async.enabled=true",
         "--hoodie-conf", "compaction.delta_commits=5",
         "--hoodie-conf", "hoodie.compact.inline.max.delta.commits=5",
         "--hoodie-conf", "compaction.delta_seconds=600",
         "--hoodie-conf", "hoodie.compact.inline.max.delta.seconds=600",
         // Metrics, schema, cleaning and archival settings
         "--hoodie-conf", "hoodie.metrics.on=true",
         "--hoodie-conf", "hoodie.metrics.reporter.type=CLOUDWATCH",
         "--hoodie-conf", "hoodie.deltastreamer.kafka.commit_on_errors=true",
         "--hoodie-conf", "hoodie.schema.on.read.enable=true",
         "--hoodie-conf", "hoodie.keep.max.commits=10",
         "--hoodie-conf", "hoodie.metadata.keep.min.commits=8",
         "--hoodie-conf", "hoodie.keep.min.commits=8",
         "--hoodie-conf", "hoodie.cleaner.commits.retained=5",
         "--hoodie-conf", "hoodie.write.markers.type=direct",
         "--hoodie-conf", "hoodie.embed.timeline.server=false",
         "--continuous"
         //"--commit-on-errors"
       )
   
       val cfg = HoodieDeltaStreamer.getConfig(config)
       val additionalSparkConfigs = SchedulerConfGenerator.getSparkSchedulingConfigs(cfg)
       val jssc = UtilHelpers.buildSparkContext("delta-streamer-test", "jes", additionalSparkConfigs)
       val spark = jssc.sc
       val glueContext: GlueContext = new GlueContext(spark)
       Job.init(args("JOB_NAME"), glueContext, args.asJava)
       try {
         new HoodieDeltaStreamer(cfg, jssc).sync()
       } finally {
         jssc.stop()
       }
       Job.commit()
     }
   }
   ```
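   
   To check when compaction instants are scheduled and whether they stay pending (and for how long), something like the sketch below could be run against the table's base path. This is a minimal sketch using Hudi's `HoodieTableMetaClient` API as of 0.12.x; the base path placeholders are hypothetical and should be replaced with the actual S3 path the Glue job writes to.
   
   ```scala
   import org.apache.hadoop.conf.Configuration
   import org.apache.hudi.common.table.HoodieTableMetaClient
   import scala.collection.JavaConverters._
   
   // Hypothetical placeholder path: substitute the table path built by the Glue job.
   val basePath = "s3://<TARGET_BUCKET>/hudi_data_lake_prod5/<TARGET_TABLE>/"
   
   val metaClient = HoodieTableMetaClient.builder()
     .setConf(new Configuration())
     .setBasePath(basePath)
     .build()
   
   // Compaction plans that are still requested or inflight on the active timeline.
   metaClient.getActiveTimeline
     .filterPendingCompactionTimeline()
     .getInstants.iterator().asScala
     .foreach(i => println(s"pending compaction ${i.getTimestamp} (${i.getState})"))
   ```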
   
   
    
   
   
   **Expected behavior**
   
   
   We expect the compaction job to take no more than about 3 minutes, and compaction should not block data ingestion.
   
   **Environment Description**
   
   * Hudi version : 0.12.1
   * Glue version : 4.0
   * Storage (HDFS/S3/GCS..) : S3
   
   
   
   **Stacktrace**
   <img width="1264" alt="Screenshot 2023-12-07 at 3 00 50 PM" 
src="https://github.com/apache/hudi/assets/122790088/05c4dc9a-aaa6-4931-8e73-9ca194d40613";>
   
   <img width="1268" alt="Screenshot 2023-12-07 at 2 59 03 PM" 
src="https://github.com/apache/hudi/assets/122790088/c5e7b55d-a32f-4031-a156-9c9fcdbea408";>
   
   
   
   

