zyl891229 commented on issue #9991:
URL: https://github.com/apache/hudi/issues/9991#issuecomment-1801056154

   > @zyl891229 Yes, you are right, there is an issue with the bulk_insert operation type in combination with these two settings. insert/upsert run fine, though, so you may use those instead. I confirmed both cases fail, whether using one partition column or two.
   > 
   > JIRA to track - https://issues.apache.org/jira/browse/HUDI-7040
   > 
   > Reproducible code -
   > 
   > ```
   > spark = get_spark_session(spark_version="3.2", hudi_version="0.14.0")
   > 
   > insert_df = get_insert_df(spark, 10)
   > 
   > hudi_configs = {
   >     "hoodie.table.name": TABLE_NAME,
   >     "hoodie.datasource.write.recordkey.field": "UUID",
   >     "hoodie.datasource.write.precombine.field": "Name",
   >     "hoodie.datasource.write.partitionpath.field": "Company",
   >     "hoodie.datasource.write.operation": "bulk_insert",
   >     "hoodie.datasource.write.hive_style_partitioning": "true",
   >     "hoodie.populate.meta.fields": "false",
   >     "hoodie.datasource.write.drop.partition.columns": "true"
   > }
   > 
   > insert_df.write.format("hudi").mode("append").options(**hudi_configs).save(PATH)
   > ```
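
   For anyone trying to run the repro end to end: the two helpers and the `TABLE_NAME`/`PATH` constants were not shared, so here is a minimal sketch of what they might look like. The bundle coordinates, column values, and paths below are my assumptions, not the original reporter's code:

   ```
   # Hypothetical reconstructions of the helpers used in the repro above.
   import uuid

   from pyspark.sql import SparkSession

   def get_spark_session(spark_version="3.2", hudi_version="0.14.0"):
       # Pull in the matching Hudi bundle; the artifact assumes Spark 3.2 / Scala 2.12.
       return (
           SparkSession.builder
           .appName("hudi-7040-repro")
           .config("spark.jars.packages",
                   f"org.apache.hudi:hudi-spark3.2-bundle_2.12:{hudi_version}")
           .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
           .getOrCreate()
       )

   def get_insert_df(spark, num_rows):
       # Rows carrying the three columns the Hudi configs reference: UUID, Name, Company.
       rows = [(str(uuid.uuid4()), f"name_{i}", f"company_{i % 2}") for i in range(num_rows)]
       return spark.createDataFrame(rows, ["UUID", "Name", "Company"])

   TABLE_NAME = "drop_partition_cols_repro"          # hypothetical table name
   PATH = "/tmp/hudi/drop_partition_cols_repro"      # hypothetical base path
   ```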
   
   
   
   Thank you for your reply. Is there any idea or workaround you can suggest?
   I will patch this in my fork first. We need to drop the unused partition columns to minimize storage space and save cost.
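
   Until HUDI-7040 is fixed, a minimal sketch of the interim workaround suggested above, switching the operation from bulk_insert to insert (reusing `hudi_configs`, `insert_df`, and `PATH` from the repro):

   ```
   # Per the comment above, insert/upsert work with these two settings while
   # bulk_insert does not, so switching the operation is an interim workaround.
   hudi_configs["hoodie.datasource.write.operation"] = "insert"
   insert_df.write.format("hudi").mode("append").options(**hudi_configs).save(PATH)
   ```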

