hudi-bot opened a new issue, #16245:
URL: https://github.com/apache/hudi/issues/16245

   Currently, to use bulk insert with insert overwrite operations through the Spark 
Datasource, a user has to set the internal config 
`hoodie.bulkinsert.overwrite.operation.type` to 
`insert_overwrite_table` (with the `Overwrite` save mode) or `insert_overwrite` 
(with the `Append` save mode). Since this is an internal config, it should not be 
exposed. This issue aims to find an easier way to support the feature for 
users, either through a new config or a different config altogether.
   One idea is to deprecate `hoodie.spark.sql.insert.into.operation` and create a 
new config that can be shared by both SQL and the Datasource.
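
   For illustration, the current workflow described above could be sketched as a small helper that pairs the internal config value with the matching Spark save mode. This is a hypothetical sketch, not Hudi code; only the config keys and value/save-mode pairing come from the description above, and the helper name is invented:

```python
def bulk_insert_overwrite_options(overwrite_table: bool) -> dict:
    """Pair `hoodie.bulkinsert.overwrite.operation.type` with the
    matching Spark save mode, as the issue describes:
    insert_overwrite_table -> Overwrite, insert_overwrite -> Append.
    """
    return {
        # bulk insert as the write operation
        "hoodie.datasource.write.operation": "bulk_insert",
        # internal config the issue wants to stop exposing
        "hoodie.bulkinsert.overwrite.operation.type": (
            "insert_overwrite_table" if overwrite_table else "insert_overwrite"
        ),
        # corresponding Spark save mode (passed to .mode(...), not an option)
        "save_mode": "Overwrite" if overwrite_table else "Append",
    }
```

   With a real SparkSession, the options (minus `save_mode`) would be passed via `df.write.format("hudi").options(...)` and the save mode via `.mode(...)`.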
   
   ## JIRA info
   
   - Link: https://issues.apache.org/jira/browse/HUDI-6889
   - Type: Bug


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
