logesr opened a new issue, #12685:
URL: https://github.com/apache/hudi/issues/12685

   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
   - Join the mailing list to engage in conversations and get faster support at [email protected].
   
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   We are running an AWS Glue job that writes two Hudi tables. It is an hourly job that usually takes around 10-12 minutes, but occasionally it takes more than 25 minutes. When we dug into it, the count stage of "Doing partition and writing data" takes much longer for one partition, even when that partition receives few records. Note that we receive 100k to 200k records per hour on average. We could not find any pattern in which partition is affected.
   
   
![Image](https://github.com/user-attachments/assets/15bef9d0-32af-4d56-b0b3-e9a623d47b93)
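   To rule out input skew, here is a minimal diagnostic sketch (our addition, not part of the original job; it assumes the same `df` and partition column that are later passed to `write_df_to_hudi`) that prints per-partition record counts for the incoming batch:
   
   ```python
   from pyspark.sql import functions as F
   
   def log_partition_skew(df, partition_key_column):
       # Count incoming records per partition value before writing, to see
       # whether the slow partition actually receives disproportionately
       # many records in that batch.
       counts = (df.groupBy(partition_key_column)
                   .agg(F.count(F.lit(1)).alias("record_count"))
                   .orderBy(F.desc("record_count")))
       counts.show(50, truncate=False)
   ```
   
   If the slow partition does not stand out here, the extra time is likely spent rewriting existing files rather than processing new records.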
   
   **To Reproduce**
   
   ```python
   hudi_options = {
       "hoodie.datasource.write.partitionpath.urlencode": "true",
       "hoodie.datasource.write.table.type": "COPY_ON_WRITE",
       "hoodie.datasource.write.reconcile.schema": "true",
       "hoodie.schema.on.read.enable": "true",
       "hoodie.table.base.file.format": "PARQUET",
       "hoodie.parquet.compression.codec": "snappy",
       "hoodie.datasource.write.hive_style_partitioning": "true",
       "hoodie.datasource.hive_sync.enable": "true",
       "hoodie.datasource.hive_sync.partition_extractor_class": "org.apache.hudi.hive.MultiPartKeysValueExtractor",
       "hoodie.datasource.hive_sync.use_jdbc": "false",
       "hoodie.datasource.hive_sync.mode": "hms",
       "hoodie.datasource.hive_sync.support_timestamp": "true",
       "hoodie.parquet.max.file.size": "125829120",     # 120 MB
       "hoodie.parquet.small.file.limit": "104857600",  # 100 MB
       "hoodie.copyonwrite.record.size.estimate": "5120",
   }
   ```
   
   
   ```python
   def write_df_to_hudi(df, target_path, partition_key_column, database,
                        table_name, primary_key, mode="append",
                        timestamp_column_name="timestamp"):
       hudi_options.update({
           "hoodie.table.name": table_name,
           "hoodie.datasource.write.recordkey.field": primary_key,
           "hoodie.datasource.write.partitionpath.field": partition_key_column,
           "hoodie.datasource.write.precombine.field": timestamp_column_name,
           "hoodie.datasource.hive_sync.database": database,
           "hoodie.datasource.hive_sync.table": table_name,
           "hoodie.datasource.hive_sync.partition_fields": partition_key_column,
       })
       df.write.format("org.apache.hudi") \
           .option("hoodie.datasource.write.operation", "INSERT") \
           .options(**hudi_options) \
           .mode(mode) \
           .save(target_path)
   ```
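   For completeness, a hypothetical invocation (the path, database, table, and column names below are placeholders, not taken from the report):
   
   ```python
   # Placeholder values for illustration only.
   write_df_to_hudi(
       df=df,
       target_path="s3://my-bucket/warehouse/events",
       partition_key_column="event_date",
       database="analytics",
       table_name="events",
       primary_key="event_id",
       timestamp_column_name="timestamp",
   )
   ```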
   
   **Expected behavior**
   
   We are looking for consistent run times, or run times that scale directly with the number of incoming records.
   
   **Environment Description**
   
   * Hudi version : 0.12.1
   
   * Spark version : 3.3
   
   * Hive version :
   
   * Hadoop version :
   
   * Storage (HDFS/S3/GCS..) : s3
   
   * Running on Docker? (yes/no) : no
   
   
   
   

