dongkelun commented on pull request #4083:
URL: https://github.com/apache/hudi/pull/4083#issuecomment-1001969965


   > @dongkelun @xushiyan I'm sorry, but I don't support this PR as the way to solve the problems of `set the start query time of the table` and `query incrementally`. Some points we should think about:
   > 
   > 1. As it stands, this PR only works for Spark SQL. What about Spark DataFrame writes? We should support both.
   >
   > 2. After adding a `database` config, whether we get the `database` value from an individual config like `hoodie.datasource.write.database.name` or parse it from the existing `hoodie.datasource.write.table.name`/`hoodie.table.name` when `hoodie.sql.uses.database.table.name` is enabled, we will have four related options: `hoodie.datasource.hive_sync.table`, `hoodie.datasource.hive_sync.database`, and the two mentioned above. Users then have to learn all of these. Can we combine and simplify them?
   > 
   > IMO, Hudi, with its mountain of configs, already has a high barrier to entry. We should choose solutions that balance functionality and user experience as far as possible.
   @YannByron Hello,
   1. For Spark DataFrame writes, we can use `hoodie.table.name` to specify the table name.
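   For illustration, a DataFrame write could carry the database-qualified name in `hoodie.table.name` like this (a minimal sketch, assuming an existing SparkSession with a DataFrame `df`; the table path, record key, and precombine field values are hypothetical, and the snippet is not verified against a specific Hudi release):

   ```scala
   // Sketch only: assumes `df` is the DataFrame to write.
   // "hoodie.table.name" carries the (here database-qualified) table name,
   // as proposed above; the other option values are placeholders.
   df.write.format("hudi")
     .option("hoodie.table.name", "test_db.test_table")
     .option("hoodie.datasource.write.recordkey.field", "id")
     .option("hoodie.datasource.write.precombine.field", "ts")
     .mode("append")
     .save("/tmp/hudi/test_table")
   ```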
   2. Because the database name can already be specified when creating a table in Spark SQL, it is not specified through `hoodie.database.name` or any other configuration. I think `hoodie.sql.uses.database.table.name` is just a switch that decides whether the database name should be included in `hoodie.table.name` for SQL. It does not conflict with other configurations.
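   Point 2 can be sketched as follows (a sketch only, assuming a SparkSession `spark`; table and column names are hypothetical):

   ```scala
   // The database is supplied by qualifying the table name in the DDL itself,
   // not by a separate hoodie.database.name-style config option.
   spark.sql(
     """CREATE TABLE test_db.test_table (
       |  id INT,
       |  name STRING,
       |  ts BIGINT
       |) USING hudi
       |TBLPROPERTIES (primaryKey = 'id', preCombineField = 'ts')
       |""".stripMargin)
   ```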
   As for combining the other duplicate configuration items, I think we can address that in a separate PR.
   

