KnightChess commented on PR #9113:
URL: https://github.com/apache/hudi/pull/9113#issuecomment-1623226684

   - Internally, to stay compatible with users' Hive habits, we use 
`hive.exec.dynamic.partition.mode` to control this behavior.
   - In Spark, this behavior differs across data sources. For example, when 
writing to a Hive table, `spark.sql.sources.partitionOverwriteMode` has no 
effect: in `InsertIntoHiveTable`, the behavior is governed only by the 
`hive.exec.dynamic.xxx` options, which have their own controller action.
   - So, if we want Spark-like behavior here, I think the behavior should 
belong to Hudi itself, not to the engine, and every engine should implement 
it with its own parameters. What do you think?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
