This is an automated email from the ASF dual-hosted git repository.

danny0405 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hudi.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 0afc44c1a73 [HUDI-7838][DOCS] Remove the option hoodie.schema.cache.enable (#11506)
0afc44c1a73 is described below

commit 0afc44c1a737d7b0440e5518e37a06b307057ef3
Author: Vova Kolmakov <[email protected]>
AuthorDate: Wed Jun 26 10:06:49 2024 +0700

    [HUDI-7838][DOCS] Remove the option hoodie.schema.cache.enable (#11506)
---
 website/docs/configurations.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/website/docs/configurations.md b/website/docs/configurations.md
index 4ca3a09e81e..278be1f5afa 100644
--- a/website/docs/configurations.md
+++ b/website/docs/configurations.md
@@ -963,7 +963,6 @@ Configurations that control write behavior on Hudi tables. These can be directly
 | [hoodie.rollback.instant.backup.enabled](#hoodierollbackinstantbackupenabled) | false | Backup instants removed during rollback and restore (useful for debugging)<br />`Config Param: ROLLBACK_INSTANT_BACKUP_ENABLED` [...]
 | [hoodie.rollback.parallelism](#hoodierollbackparallelism) | 100 | This config controls the parallelism for rollback of commits. Rollbacks perform deletion of files or logging delete blocks to file groups on storage in parallel. The configured value limits the parallelism so that the number of Spark tasks does not exceed the value. If rollback is slow due to the [...]
 | [hoodie.rollback.using.markers](#hoodierollbackusingmarkers) | true | Enables a more efficient mechanism for rollbacks based on the marker files generated during the writes. Turned on by default.<br />`Config Param: ROLLBACK_USING_MARKERS_ENABLE` [...]
-| [hoodie.schema.cache.enable](#hoodieschemacacheenable) | false | cache query internalSchemas in driver/executor side<br />`Config Param: ENABLE_INTERNAL_SCHEMA_CACHE` [...]
 | [hoodie.sensitive.config.keys](#hoodiesensitiveconfigkeys) | ssl,tls,sasl,auth,credentials | Comma separated list of filters for sensitive config keys. Hudi Streamer will not print any configuration which contains the configured filter. For example with a configured filter `ssl`, value for config `ssl.trustore.location` would be masked.<br />`Config Param: SENSITIVE_CONFIG_KEYS_FILTER` [...]
 | [hoodie.skip.default.partition.validation](#hoodieskipdefaultpartitionvalidation) | false | When table is upgraded from pre 0.12 to 0.12, we check for "default" partition and fail if found one. Users are expected to rewrite the data in those partitions. Enabling this config will bypass this validation<br />`Config Param: SKIP_DEFAULT_PARTITION_VALIDATION`<br />`Since Version: 0.12.0` [...]
 | [hoodie.table.base.file.format](#hoodietablebasefileformat) | PARQUET | File format to store all the base file data. org.apache.hudi.common.model.HoodieFileFormat: Hoodie file formats. PARQUET(default): Apache Parquet is an open source, column-oriented data file format designed for efficient data storage and retrieval. It provides efficient data compression and [...]
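For context on the table rows above: these write configs are typically passed as string-keyed options on a Hudi write. Below is a minimal, hypothetical sketch of such an options map using the keys and defaults from the diff; the table name, path, and the commented-out Spark write call are placeholders, not part of this commit.

```python
# Hypothetical sketch: rollback-related write configs from the table above,
# expressed as the string-keyed options a Hudi Spark write would accept.
# "example_table" and "/tmp/example" are placeholders.
hudi_options = {
    "hoodie.table.name": "example_table",
    # Default 100: caps the number of parallel Spark tasks during rollback.
    "hoodie.rollback.parallelism": "100",
    # Default true: marker-file-based rollback (more efficient mechanism).
    "hoodie.rollback.using.markers": "true",
    # Default false: back up instants removed during rollback/restore.
    "hoodie.rollback.instant.backup.enabled": "false",
}

# With a SparkSession and DataFrame `df` available, the write would look like:
# df.write.format("hudi").options(**hudi_options).mode("append").save("/tmp/example")
```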
