flashJd commented on PR #9113:
URL: https://github.com/apache/hudi/pull/9113#issuecomment-1625108838

   > Adding the ability `OVERWRITE_DYNAMIC` should be enough? I'm not sure what 
`implement v2 BATCH_WRITE in datasource V2 feature as a whole` means; this is 
my WeChat: rexboom_an, you can add me so we can align on this quickly.
   > 
   > > why not use an extra Hudi config to control it, like Iceberg and Delta do
   > 
   > I mean we also need to add an option in `HoodieOptionConfig` to control 
this, but in addition we can respect `spark.sql.sources.partitionOverwriteMode`, 
since Spark users are familiar with it.
   
   To summarize our discussion, when an INSERT OVERWRITE is encountered:
   1. First respect `hoodie.datasource.write.operation`: if it is configured, 
use it to insert overwrite the partition/table.
   2. If `hoodie.datasource.write.operation` is not configured, use a new Hudi 
config to control the behavior, as Iceberg and Delta Lake do.
   3. If the new config is not set either, respect Spark's 
`spark.sql.sources.partitionOverwriteMode`, whose default value is `static`.
   This behavior is not forward compatible, but it is aligned with Spark.
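
   The three-step precedence above could be sketched roughly as follows. This is only an illustration: the class, method, and the key for the proposed new Hudi config (`hoodie.datasource.overwrite.mode` here) are hypothetical names, not actual Hudi APIs; only `hoodie.datasource.write.operation` and `spark.sql.sources.partitionOverwriteMode` are real config keys.

```java
import java.util.Map;

// Illustrative sketch of the proposed resolution order; not real Hudi code.
public class OverwriteModeResolver {

    static final String HOODIE_WRITE_OPERATION = "hoodie.datasource.write.operation";
    // Hypothetical key for the proposed new Hudi config (name is a placeholder).
    static final String HOODIE_OVERWRITE_MODE = "hoodie.datasource.overwrite.mode";
    static final String SPARK_PARTITION_OVERWRITE_MODE = "spark.sql.sources.partitionOverwriteMode";

    /** Resolves the effective overwrite behavior following the three-step precedence. */
    public static String resolve(Map<String, String> hudiOpts, Map<String, String> sparkConf) {
        // 1. An explicitly configured write operation wins.
        String op = hudiOpts.get(HOODIE_WRITE_OPERATION);
        if (op != null) {
            return op;
        }
        // 2. Otherwise, fall back to the new Hudi-level config.
        String mode = hudiOpts.get(HOODIE_OVERWRITE_MODE);
        if (mode != null) {
            return mode;
        }
        // 3. Otherwise, respect Spark's setting; Spark's default is "static".
        return sparkConf.getOrDefault(SPARK_PARTITION_OVERWRITE_MODE, "static");
    }
}
```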


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to