danny0405 commented on code in PR #9056: URL: https://github.com/apache/hudi/pull/9056#discussion_r1248375900
##########
website/docs/configurations.md:
##########
@@ -20,6 +20,7 @@ hoodie.datasource.hive_sync.support_timestamp false
 It helps to have a central configuration file for your common cross job configurations/tunings, so all the jobs on your cluster can utilize it. It also works with Spark SQL DML/DDL, and helps avoid having to pass configs inside the SQL statements. By default, Hudi would load the configuration file under `/etc/hudi/conf` directory. You can specify a different configuration directory location by setting the `HUDI_CONF_DIR` environment variable.
+- [**Parquet Configs**](#PARQUET_CONFIG): These configs makes it possible to bring native parquet features
 - [**Spark Datasource Configs**](#SPARK_DATASOURCE): These configs control the Hudi Spark Datasource, providing ability to define keys/partitioning, pick out the write operation, specify how to merge records or choosing query type to read.

Review Comment:
   Should we put it under `Spark Datasource Configs` ?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
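For context, the config-directory behavior described in the quoted doc text can be sketched as follows. This is a minimal illustration, not part of the PR: the `/tmp/hudi-conf-demo` path is made up for the demo, and it assumes the global config file is named `hudi-defaults.conf` as in the Hudi docs.

```shell
# Sketch (illustrative paths): Hudi reads its global config file from
# /etc/hudi/conf by default; HUDI_CONF_DIR overrides that location.
mkdir -p /tmp/hudi-conf-demo

# A shared cross-job setting, as shown in the diff context above.
printf 'hoodie.datasource.hive_sync.support_timestamp false\n' \
  > /tmp/hudi-conf-demo/hudi-defaults.conf

# Jobs launched with this variable set pick up the shared config dir.
export HUDI_CONF_DIR=/tmp/hudi-conf-demo
cat "$HUDI_CONF_DIR/hudi-defaults.conf"
```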
