yangzhg commented on a change in pull request #5975:
URL: https://github.com/apache/incubator-doris/pull/5975#discussion_r652293370



##########
File path: docs/en/administrator-guide/config/fe_config.md
##########
@@ -122,681 +122,1873 @@ There are two ways to configure FE configuration items:
 
 ## Configurations
 
-### `agent_task_resend_wait_time_ms`
+### max_dynamic_partition_num
 
-This configuration will decide whether to resend agent task when create_time 
for agent_task is set, only when current_time - create_time > 
agent_task_resend_wait_time_ms can ReportHandler do resend agent task.     
+Default:500
 
-This configuration is currently mainly used to solve the problem of repeated 
sending of `PUBLISH_VERSION` agent tasks. The current default value of this 
configuration is 5000, which is an experimental value.
+IsMutable:true
 
-Because there is a certain time delay between submitting agent tasks to 
AgentTaskQueue and submitting to be, Increasing the value of this configuration 
can effectively solve the problem of repeated sending of agent tasks,
+MasterOnly:true
 
-But at the same time, it will cause the submission of failed or failed 
execution of the agent task to be executed again for an extended period of time.
+Used to limit the maximum number of partitions that can be created when creating a dynamic partition table, to avoid creating too many partitions at one time. The number is determined by the "start" and "end" properties in the dynamic partition parameters.
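+A hedged sketch of what this limits (the table, columns, and property values below are made up for illustration; the `dynamic_partition.*` property names come from the dynamic partition feature): with `start = -7` and `end = 3`, the table keeps 7 historical and 3 future daily partitions, well under the default limit of 500.
+
+```sql
+-- Hypothetical dynamic partition table; the span from "start" to "end"
+-- must stay within max_dynamic_partition_num.
+CREATE TABLE example_db.site_visits (
+    event_day DATE,
+    site_id INT,
+    pv BIGINT SUM DEFAULT "0"
+)
+AGGREGATE KEY(event_day, site_id)
+PARTITION BY RANGE(event_day) ()
+DISTRIBUTED BY HASH(site_id) BUCKETS 8
+PROPERTIES (
+    "dynamic_partition.enable" = "true",
+    "dynamic_partition.time_unit" = "DAY",
+    "dynamic_partition.start" = "-7",
+    "dynamic_partition.end" = "3",
+    "dynamic_partition.prefix" = "p"
+);
+```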
 
-### `alter_table_timeout_second`
+### grpc_max_message_size_bytes
 
-### `async_load_task_pool_size`
+Default:1G
 
-This configuration is just for compatible with old version, this config has 
been replaced by async_loading_load_task_pool_size, it will be removed in the 
future.
+Used to set the initial flow window size of the GRPC client channel, and also used as the max message size. When the result set is large, you may need to increase this value.
 
-### `async_loading_load_task_pool_size`
+### enable_outfile_to_local
 
-The loading_load task executor pool size. This pool size limits the max 
running loading_load tasks.
+Default:false
+Whether to allow the outfile function to export the results to the local disk.
 
-Currently, it only limits the loading_load task of broker load.
+### enable_access_file_without_broker
 
-### `async_pending_load_task_pool_size`
+Default:false
 
-The pending_load task executor pool size. This pool size limits the max 
running pending_load tasks.
+IsMutable:true
 
-Currently, it only limits the pending_load task of broker load and spark load.
+MasterOnly:true
 
-It should be less than 'max_running_txn_num_per_db'
+This config is used to try to skip the broker when accessing BOS or other cloud storage that would otherwise be accessed via a broker
 
-### `audit_log_delete_age`
+### enable_bdbje_debug_mode
 
-### `audit_log_dir`
+Default:false
 
-### `audit_log_modules`
+If set to true, FE will be started in BDBJE debug mode
 
-### `audit_log_roll_interval`
+### enable_fe_heartbeat_by_thrift
 
-### `audit_log_roll_mode`
+Default:false
 
-### `audit_log_roll_num`
+IsMutable:true
 
-### `auth_token`
+MasterOnly:true
 
-### `autocommit`
+This config is used to solve the FE heartbeat response read_timeout problem. When this config is set to true, the master will get the FE heartbeat response via the thrift protocol instead of the http protocol. In order to maintain compatibility with old versions, the default is false, and the configuration cannot be changed to true until all FEs are upgraded.
 
-### `auto_increment_increment`
+### enable_alpha_rowset
 
-### `backup_job_default_timeout_ms`
+Default:false
 
-### `backup_plugin_path`
+Whether to support the creation of alpha rowset tables. The default is false and it should only be used in emergency situations. This config should be removed in some future version
 
-### `balance_load_score_threshold`
+### enable_http_server_v2
 
-### `batch_size`
+Default:true after the official 0.14.0 version is released, and false before that
 
-### `bdbje_heartbeat_timeout_second`
+HTTP Server V2 is implemented by SpringBoot, with an architecture that separates the front end and the back end. Only when httpv2 is enabled can users use the new front-end UI interface
 
-### `bdbje_lock_timeout_second`
+### http_max_file_size
 
-### `bdbje_replica_ack_timeout_second`
+### http_max_request_size
 
-Metadata will be synchronously written to multiple Follower FEs. This 
parameter is used to control the timeout period for Master FE to wait for 
Follower FE to send ack. When the written data is large, the ack time may be 
longer. If it times out, the metadata writing will fail and the FE process will 
exit. At this time, you can increase this parameter appropriately.
+Default:100M
 
-Default: 10 seconds.
+The above two parameters set the maximum web upload file size limit for the http v2 version. The default is 100M; you can modify it according to your needs
 
-### `broker_load_default_timeout_second`
+### frontend_address
 
-### `brpc_idle_wait_max_time`
Status: Deprecated, not recommended to use. This parameter may be deleted later.
Type: string
Description: Explicitly set the IP address of FE instead of using *InetAddress.getByName* to get the IP address, usually used when *InetAddress.getByName* cannot obtain the expected result. Only an IP address is supported, not a hostname.
Default value: 0.0.0.0
 
-### `brpc_number_of_concurrent_requests_processed`
+### default_max_filter_ratio
 
-### `capacity_used_percent_high_water`
+Default:0
 
-### `catalog_trash_expire_second`
+IsMutable:true
 
-### `catalog_try_lock_timeout_ms`
+MasterOnly:true
 
-### `character_set_client`
+Maximum percentage of data that can be filtered (due to reasons such as irregular data). The default value is 0.
 
-### `character_set_connection`
+### default_db_data_quota_bytes
 
-### `character_set_results`
+Default:1TB
 
-### `character_set_server`
+IsMutable:true
 
-### `check_consistency_default_timeout_second`
+MasterOnly:true
 
-### `check_java_version`
+Used to set the default database data quota size. To set the quota size of a 
single database, you can use:
 
-### `clone_capacity_balance_threshold`
+```sql
+-- Set the database data quota; the unit is one of B/K/KB/M/MB/G/GB/T/TB/P/PB
+ALTER DATABASE db_name SET DATA QUOTA quota;
+-- View the configuration (Detail: HELP SHOW DATA)
+SHOW DATA;
+```
 
-### `clone_checker_interval_second`
+### enable_batch_delete_by_default
 
-### `clone_distribution_balance_threshold`
+Default:false
 
-### `clone_high_priority_delay_second`
+IsMutable:true
 
-### `clone_job_timeout_second`
+MasterOnly:true
 
-### `clone_low_priority_delay_second`
+Whether to add a delete sign column when create unique table
 
-### `clone_max_job_num`
+### recover_with_empty_tablet
 
-### `clone_normal_priority_delay_second`
+Default:false
 
-### `cluster_id`
+IsMutable:true
 
-### `cluster_name`
+MasterOnly:true
 
-### `codegen_level`
+ In some very special circumstances, such as code bugs, or human misoperation, 
etc., all replicas of some tablets may be lost. In this case, the data has been 
substantially lost. However, in some scenarios, the business still hopes to 
ensure that the query will not report errors even if there is data loss, and 
reduce the perception of the user layer. At this point, we can use the blank 
Tablet to fill the missing replica to ensure that the query can be executed 
normally.
 
-### `collation_connection`
+Set to true so that Doris will automatically use blank replicas to fill tablets whose replicas have all been damaged or lost
 
-### `collation_database`
+### max_allowed_in_element_num_of_delete
 
-### `collation_server`
+Default:1024
 
-### `consistency_check_end_time`
+IsMutable:true
 
-### `consistency_check_start_time`
+MasterOnly:true
 
-### `custom_config_dir`
+This configuration is used to limit element num of InPredicate in delete 
statement.
 
-Configure the location of the `fe_custom.conf` file. The default is in the 
`conf/` directory.
+### cache_result_max_row_count
 
-In some deployment environments, the `conf/` directory may be overwritten due 
to system upgrades. This will cause the user modified configuration items to be 
overwritten. At this time, we can store `fe_custom.conf` in another specified 
directory to prevent the configuration file from being overwritten.
+Default:3000
 
-### `db_used_data_quota_update_interval_secs`
+IsMutable:true
 
-For better data load performance, in the check of whether the amount of data 
used by the database before data load exceeds the quota, we do not calculate 
the amount of data already used by the database in real time, but obtain the 
periodically updated value of the daemon thread.
+MasterOnly:false
 
-This configuration is used to set the time interval for updating the value of 
the amount of data used by the database.
+In order to avoid occupying too much memory, the maximum number of rows that can be cached is 3000 by default. If this threshold is exceeded, the result set cannot be cached
 
-### `default_rowset_type`
+### cache_last_version_interval_second
 
-### `default_storage_medium`
+Default:900
 
-### `delete_thread_num`
+IsMutable:true
 
-### `desired_max_waiting_jobs`
+MasterOnly:false
 
-### `disable_balance`
+The time interval of the latest partition version of the table refers to the interval between the last data update and the current time. It is generally set to 900 seconds, which distinguishes offline import from real-time import
 
-### `disable_cluster_feature`
+### cache_enable_partition_mode
 
-### `disable_colocate_balance`
+Default:true
 
-### `disable_colocate_join`
+IsMutable:true
 
-### `disable_colocate_relocate`
+MasterOnly:false
 
-### `disable_hadoop_load`
+When this switch is turned on, the query result set will be cached according 
to the partition. If the interval between the query table partition time and 
the query time is less than cache_last_version_interval_second, the result set 
will be cached according to the partition.
 
-### `disable_load_job`
+Part of the data will be obtained from the cache and some data from the disk 
when querying, and the data will be merged and returned to the client.
 
-### `disable_streaming_preaggregations`
+### cache_enable_sql_mode
 
-### `div_precision_increment`
+Default:true
 
-### `dpp_bytes_per_reduce`
+IsMutable:true
 
-### `dpp_config_str`
+MasterOnly:false
 
-### `dpp_default_cluster`
+If this switch is turned on, the SQL query result set will be cached. If the interval between the last visited version time of all partitions of all tables in the query and the query time is greater than cache_last_version_interval_second, and the result set is smaller than cache_result_max_row_count, the result set will be cached, and the next identical SQL will hit the cache
 
-### `dpp_default_config_str`
+If set to true, FE will enable SQL result caching. This option is suitable for offline data update scenarios
 
-### `dpp_hadoop_client_path`
+|                        | case1 | case2 | case3 | case4 |
+| ---------------------- | ----- | ----- | ----- | ----- |
+| enable_sql_cache       | false | true  | true  | false |
+| enable_partition_cache | false | false | true  | true  |
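+As a sketch of how the four cases in the table above might be selected per session (assuming, based on the row names, that `enable_sql_cache` and `enable_partition_cache` are session variables; treat the exact variable names as an assumption):
+
+```sql
+-- case2 in the table above: SQL cache on, partition cache off.
+SET enable_sql_cache = true;
+SET enable_partition_cache = false;
+```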
 
-### `drop_backend_after_decommission`
+### min_clone_task_timeout_sec and max_clone_task_timeout_sec
 
-This configuration is used to control whether the system drops the BE after 
successfully decommissioning the BE. If true, the BE node will be deleted after 
the BE is successfully offline. If false, after the BE successfully goes 
offline, the BE will remain in the DECOMMISSION state, but will not be dropped.
+Default:Minimum 3 minutes, maximum two hours
 
-This configuration can play a role in certain scenarios. Assume that the 
initial state of a Doris cluster is one disk per BE node. After running for a 
period of time, the system has been vertically expanded, that is, each BE node 
adds 2 new disks. Because Doris currently does not support data balancing among 
the disks within the BE, the data volume of the initial disk may always be much 
higher than the data volume of the newly added disk. At this time, we can 
perform manual inter-disk balancing by the following operations:
+IsMutable:true
 
-1. Set the configuration item to false.
-2. Perform a decommission operation on a certain BE node. This operation will 
migrate all data on the BE to other nodes.
-3. After the decommission operation is completed, the BE will not be dropped. 
At this time, cancel the decommission status of the BE. Then the data will 
start to balance from other BE nodes back to this node. At this time, the data 
will be evenly distributed to all disks of the BE.
-4. Perform steps 2 and 3 for all BE nodes in sequence, and finally achieve the 
purpose of disk balancing for all nodes.
+MasterOnly:true
 
-### `dynamic_partition_check_interval_seconds`
+Type: long
+Description: Used to control the maximum timeout of a clone task. The unit is second.
+Default value: 7200
+Dynamic modification: yes
 
-### `dynamic_partition_enable`
+Can cooperate with `min_clone_task_timeout_sec` to control the maximum and minimum timeout of a clone task. Under normal circumstances, the timeout of a clone task is estimated by the amount of data and the minimum transfer rate (5MB/s). In some special cases, these two configurations can be used to set the upper and lower bounds of the clone task timeout to ensure that the clone task can be completed successfully.
 
-### `edit_log_port`
+### agent_task_resend_wait_time_ms
 
-### `edit_log_roll_num`
+Default:5000
 
-### `edit_log_type`
+IsMutable:true
 
-### `enable_auth_check`
+MasterOnly:true
 
-### `enable_batch_delete_by_default`
-Whether to add a delete sign column when create unique table
+This configuration will decide whether to resend an agent task when the create_time of the agent_task is set; only when current_time - create_time > agent_task_resend_wait_time_ms will ReportHandler resend the agent task.
 
-### `enable_deploy_manager`
+This configuration is currently mainly used to solve the problem of repeated 
sending of `PUBLISH_VERSION` agent tasks. The current default value of this 
configuration is 5000, which is an experimental value.
 
-### `enable_insert_strict`
+Because there is a certain time delay between submitting agent tasks to AgentTaskQueue and submitting them to BE, increasing the value of this configuration can effectively solve the problem of repeated sending of agent tasks.
 
-### `enable_local_replica_selection`
+But at the same time, it will cause agent tasks whose submission or execution failed to be re-executed only after an extended period of time
 
-### `enable_materialized_view`
+### enable_odbc_table
 
-This configuration is used to turn on and off the creation of materialized 
views. If set to true, the function to create a materialized view is enabled. 
The user can create a materialized view through the `CREATE MATERIALIZED VIEW` 
command. If set to false, materialized views cannot be created.
+Default:false
 
-If you get an error `The materialized view is coming soon` or `The 
materialized view is disabled` when creating the materialized view, it means 
that the configuration is set to false and the function of creating the 
materialized view is turned off. You can start to create a materialized view by 
modifying the configuration to true.
+IsMutable:true
 
-This variable is a dynamic configuration, and users can modify the 
configuration through commands after the FE process starts. You can also modify 
the FE configuration file and restart the FE to take effect.
+MasterOnly:true
 
-### `enable_metric_calculator`
+Whether to enable the ODBC table; it is not enabled by default. You need to manually configure it when you use it. This parameter can be set by: `ADMIN SET FRONTEND CONFIG("key"="value")`
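+A concrete, hedged example of the `ADMIN SET FRONTEND CONFIG` pattern mentioned above (the follow-up `ADMIN SHOW FRONTEND CONFIG` check is an assumption):
+
+```sql
+-- Enable ODBC tables at runtime; this config is mutable and master-only.
+ADMIN SET FRONTEND CONFIG ("enable_odbc_table" = "true");
+-- Check the current value.
+ADMIN SHOW FRONTEND CONFIG LIKE "enable_odbc_table";
+```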
 
-### `enable_spilling`
+### enable_spark_load
 
-### `enable_token_check`
+Default:false
 
-### `es_state_sync_interval_second`
+IsMutable:true
 
-### `event_scheduler`
+MasterOnly:true
 
-### `exec_mem_limit`
+Whether to enable spark load temporarily, it is not enabled by default
 
-### `export_checker_interval_second`
+### enable_strict_storage_medium_check
 
-### `export_running_job_num_limit`
+Default:false
 
-### `export_tablet_num_per_task`
+IsMutable:true
 
-### `export_task_default_timeout_second`
+MasterOnly:true
 
-### `expr_children_limit`
+This configuration indicates whether, when a table is being created, to check for the presence of the specified storage medium in the cluster. For example, the user specifies that the storage medium is 'SSD' when creating the table, but only 'HDD' disks exist in the cluster:
 
-### `expr_depth_limit`
+If this parameter is 'True', the error 'Failed to find enough host in all Backends with storage medium is SSD, need 3' is reported.
 
-### `force_do_metadata_checkpoint`
+If this parameter is 'False', no error is reported when the table is created. Instead, the table is created on disks with 'HDD' as the storage medium
 
-### `forward_to_master`
+### drop_backend_after_decommission
 
-### `frontend_address`
+Default:false
 
-Status: Deprecated, not recommended use. This parameter may be deleted later
-Type: string
-Description: Explicitly set the IP address of FE instead of using 
*InetAddress.getByName* to get the IP address. Usually in 
*InetAddress.getByName* When the expected results cannot be obtained. Only IP 
address is supported, not hostname.
-Default value: 0.0.0.0
+IsMutable:true
 
-### `hadoop_load_default_timeout_second`
+MasterOnly:true
 
-### `heartbeat_mgr_blocking_queue_size`
+1. This configuration is used to control whether the system drops the BE after 
successfully decommissioning the BE. If true, the BE node will be deleted after 
the BE is successfully offline. If false, after the BE successfully goes 
offline, the BE will remain in the DECOMMISSION state, but will not be dropped.
 
-### `heartbeat_mgr_threads_num`
+   This configuration can play a role in certain scenarios. Assume that the 
initial state of a Doris cluster is one disk per BE node. After running for a 
period of time, the system has been vertically expanded, that is, each BE node 
adds 2 new disks. Because Doris currently does not support data balancing among 
the disks within the BE, the data volume of the initial disk may always be much 
higher than the data volume of the newly added disk. At this time, we can 
perform manual inter-disk balancing by the following operations:
 
-### `history_job_keep_max_second`
+   1. Set the configuration item to false.
+   2. Perform a decommission operation on a certain BE node. This operation 
will migrate all data on the BE to other nodes.
+   3. After the decommission operation is completed, the BE will not be 
dropped. At this time, cancel the decommission status of the BE. Then the data 
will start to balance from other BE nodes back to this node. At this time, the 
data will be evenly distributed to all disks of the BE.
+   4. Perform steps 2 and 3 for all BE nodes in sequence, and finally achieve the purpose of disk balancing for all nodes.
 
-### `http_backlog_num`
+### period_of_auto_resume_min
 
-The backlog_num for netty http server, When you enlarge this backlog_num,
-you should enlarge the value in the linux /proc/sys/net/core/somaxconn file at 
the same time
+Default:5 (min)
 
-### `mysql_nio_backlog_num`
+IsMutable:true
 
-The backlog_num for mysql nio server, When you enlarge this backlog_num,
-you should enlarge the value in the linux /proc/sys/net/core/somaxconn file at 
the same time
+MasterOnly:true
 
-### `http_port`
+The period of automatically resuming Routine Load jobs
 
-HTTP bind port. Defaults to 8030.
+### max_tolerable_backend_down_num
 
-### `http_max_line_length`
+Default:0
 
-The max length of an HTTP URL. The unit of this configuration is BYTE. 
Defaults to 4096.
+IsMutable:true
 
-### `http_max_header_size`
+MasterOnly:true
 
-The max size of allowed HTTP headers. The unit of this configuration is BYTE. 
Defaults to 8192.
+With the default of 0, as long as one BE is down, Routine Load cannot be automatically resumed
 
-### `ignore_meta_check`
+### enable_materialized_view
 
-### `init_connect`
+Default:true
 
-### `insert_load_default_timeout_second`
+IsMutable:true
 
-### `interactive_timeout`
+MasterOnly:true
 
-### `is_report_success`
+This configuration is used to turn on and off the creation of materialized 
views. If set to true, the function to create a materialized view is enabled. 
The user can create a materialized view through the `CREATE MATERIALIZED VIEW` 
command. If set to false, materialized views cannot be created.
+
+If you get an error `The materialized view is coming soon` or `The 
materialized view is disabled` when creating the materialized view, it means 
that the configuration is set to false and the function of creating the 
materialized view is turned off. You can start to create a materialized view by 
modifying the configuration to true.
 
-### `label_clean_interval_second`
+This variable is a dynamic configuration, and users can modify the 
configuration through commands after the FE process starts. You can also modify 
the FE configuration file and restart the FE to take effect
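+A minimal sketch of the command this config gates (the base table and columns are made up for illustration):
+
+```sql
+-- Succeeds only when enable_materialized_view = true; otherwise the
+-- "materialized view is disabled" error described above is returned.
+CREATE MATERIALIZED VIEW mv_site_pv AS
+SELECT site_id, SUM(pv) FROM site_visits GROUP BY site_id;
+```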
 
-### `label_keep_max_second`
+### check_java_version
 
-### `language`
+Default:false
 
-### `license`
+If set to true, Doris will check whether the compiled and running Java 
versions are compatible
 
-### `load_checker_interval_second`
+### max_running_rollup_job_num_per_table
 
-### `load_etl_thread_num_high_priority`
+Default:1
 
-### `load_etl_thread_num_normal_priority`
+IsMutable:true
 
-### `load_input_size_limit_gb`
+MasterOnly:true
 
-### `load_mem_limit`
+Control the concurrency limit of Rollup jobs
 
-### `load_pending_thread_num_high_priority`
+### dynamic_partition_enable
 
-### `load_pending_thread_num_normal_priority`
+Default:true
 
-### `load_running_job_num_limit`
+IsMutable:true
 
-### `load_straggler_wait_second`
+MasterOnly:true
 
-### `locale`
+Whether to enable dynamic partition, enabled by default
 
-### `log_roll_size_mb`
+### dynamic_partition_check_interval_seconds
 
-### `lower_case_table_names`
+Default:600 (s)
 
-### `master_sync_policy`
+IsMutable:true
 
-### `max_agent_task_threads_num`
+MasterOnly:true
 
-### `max_allowed_in_element_num_of_delete`
+Decide how often to check dynamic partition
 
-This configuration is used to limit element num of InPredicate in delete 
statement. The default value is 1024.
+### disable_cluster_feature
 
-### `max_allowed_packet`
+Default:true
 
-### `max_backend_down_time_second`
+IsMutable:true
 
-### `max_balancing_tablets`
+The multi cluster feature will be deprecated in version 0.12. Setting this config to true will disable all operations related to the cluster feature, including:
+
+- create/drop cluster
+- add free backend / add backend to cluster / decommission cluster balance
+- change the backend num of a cluster
+- link/migration db
 
-### `max_bdbje_clock_delta_ms`
+### force_do_metadata_checkpoint
 
-### `max_broker_concurrency`
+Default:false
 
-### `max_bytes_per_broker_scanner`
+IsMutable:true
 
-### `max_clone_task_timeout_sec`
+MasterOnly:true
 
-Type: long
-Description: Used to control the maximum timeout of a clone task. The unit is 
second.
-Default value: 7200
-Dynamic modification: yes
+If set to true, the checkpoint thread will make the checkpoint regardless of 
the jvm memory used percent
 
-Can cooperate with `mix_clone_task_timeout_sec` to control the maximum and 
minimum timeout of a clone task. Under normal circumstances, the timeout of a 
clone task is estimated by the amount of data and the minimum transfer rate 
(5MB/s). In some special cases, these two configurations can be used to set the 
upper and lower bounds of the clone task timeout to ensure that the clone task 
can be completed successfully.
+### metadata_checkpoint_memory_threshold
+
+Default:60  (60%)
+
+IsMutable:true
+
+MasterOnly:true
+
+If the jvm memory used percent (heap or old mem pool) exceeds this threshold, the checkpoint thread will not work, in order to avoid OOM.
 
-### `max_connection_scheduler_threads_num`
+### max_distribution_pruner_recursion_depth
 
-### `max_conn_per_user`
+Default:100
 
-### `max_create_table_timeout_second`
+IsMutable:true
 
-### `max_distribution_pruner_recursion_depth`
+MasterOnly:false
 
-### `max_layout_length_per_row`
+This will limit the max recursion depth of the hash distribution pruner.
+For example: `where a in (5 elements) and b in (4 elements) and c in (3 elements) and d in (2 elements)`; if a/b/c/d are distribution columns, the recursion depth will be 5 * 4 * 3 * 2 = 120, larger than 100, so the distribution pruner will not work and will just return all buckets.
+Increasing the depth can support distribution pruning for more elements, but may cost more CPU.
 
-### `max_load_timeout_second`
 
-### `max_query_retry_time`
+### using_old_load_usage_pattern
 
-### `max_routine_load_job_num`
+Default:false
 
-### `max_routine_load_task_concurrent_num`
+IsMutable:true
 
-### `max_routine_load_task_num_per_be`
+MasterOnly:true
 
-### `max_running_rollup_job_num_per_table`
+If set to true, an insert stmt that encounters a processing error will still return a label to the user, and the user can use this label to check the load job's status. The default value is false, which means that if the insert operation encounters an error, an exception is thrown to the user client directly without a load label.
 
-### `max_running_txn_num_per_db`
+### small_file_dir
+
+Default:DORIS_HOME_DIR/small_files
+
+The directory used to save small files
+
+### max_small_file_size_bytes
+
+Default:1M
+
+IsMutable:true
+
+MasterOnly:true
+
+The max size of a single file store in SmallFileMgr
+
+### max_small_file_number
+
+Default:100
+
+IsMutable:true
+
+MasterOnly:true
+
+The max number of files store in SmallFileMgr
+
+### max_routine_load_task_num_per_be
+
+Default:5
+
+IsMutable:true
+
+MasterOnly:true
+
+the max concurrent routine load task num per BE. This is to limit the number of routine load tasks sent to a BE, and it should also be less than the BE config `routine_load_thread_pool_size` (default 10), which is the routine load task thread pool size on the BE.
+
+### max_routine_load_task_concurrent_num
+
+Default:5
+
+IsMutable:true
+
+MasterOnly:true
+
+the max concurrent routine load task num of a single routine load job
+
+### max_routine_load_job_num
+
+Default:100
+
+the max routine load job num, including NEED_SCHEDULED, RUNNING, PAUSE
+
+### max_running_txn_num_per_db
+
+Default:100
+
+IsMutable:true
+
+MasterOnly:true
 
 This configuration is mainly used to control the number of concurrent load 
jobs of the same database.
 
 When there are too many load jobs running in the cluster, the newly submitted 
load jobs may report errors:
 
-```
+```text
 current running txns on db xxx is xx, larger than limit xx
 ```
 
 When this error is encountered, it means that the load jobs currently running 
in the cluster exceeds the configuration value. At this time, it is recommended 
to wait on the business side and retry the load jobs.
 
-Generally it is not recommended to increase this configuration value. An 
excessively high number of concurrency may cause excessive system load.
+Generally it is not recommended to increase this configuration value. An excessively high number of concurrency may cause excessive system load.
 
-### `max_scheduling_tablets`
+### enable_metric_calculator
 
-### `max_small_file_number`
+Default:true
 
-### `max_small_file_size_bytes`
+If set to true, the metric collector will run as a daemon timer to collect metrics at a fixed interval
 
-### `max_stream_load_timeout_second`
+### report_queue_size
 
-This configuration is specifically used to limit timeout setting for stream 
load. It is to prevent that failed stream load transactions cannot be canceled 
within a short time because of the user's large timeout setting. 
+Default: 100
 
-### `max_tolerable_backend_down_num`
+IsMutable:true
 
-### `max_unfinished_load_job`
+MasterOnly:true
 
-### `metadata_checkopoint_memory_threshold`
+This threshold is to avoid piling up too many report tasks in FE, which may cause OOM exception. In some large Doris clusters, eg: 100 Backends with ten million replicas, a tablet report may cost several seconds after some modification of metadata (drop partition, etc.). And one Backend will report tablet info every 1 min, so receiving reports without limit is unacceptable. We will optimize the processing speed of tablet reports in the future, but for now, the report is simply discarded if the queue size exceeds the limit.
+
+Some online time costs:
+
+1. disk report: 0-1 ms
+2. task report: 0-1 ms
+3. tablet report (10000 replicas): about 200ms
 
-### `metadata_failure_recovery`
+### partition_rebalance_max_moves_num_per_selection
 
-### `meta_delay_toleration_second`
+Default:10
 
-### `meta_dir`
+IsMutable:true
 
-Type: string
-Description: Doris meta data will be saved here.The storage of this dir is 
highly recommended as to be:
+MasterOnly:true
 
-* High write performance (SSD)
+Valid only if using PartitionRebalancer.
 
-* Safe (RAID)
+### partition_rebalance_move_expire_after_access
 
-Default value: DORIS_HOME_DIR + "/doris-meta";
+Default:600   (s)
 
-### `meta_publish_timeout_ms`
+IsMutable:true
 
-### `min_bytes_per_broker_scanner`
+MasterOnly:true
 
-### `min_clone_task_timeout_sec`
+Valid only if using PartitionRebalancer. If this is changed, cached moves will be cleared.
 
-Type: long
-Description: Used to control the minimum timeout of a clone task. The unit is 
second.
-Default value: 180
-Dynamic modification: yes
+### tablet_rebalancer_type
 
-See the description of `max_clone_task_timeout_sec`.
+Default:BeLoad
 
-### `mini_load_default_timeout_second`
+MasterOnly:true
 
-### `min_load_timeout_second`
+Rebalancer type (ignore case): BeLoad, Partition. If the type fails to parse, use BeLoad as default
 
-### `mysql_service_nio_enabled`
+### max_balancing_tablets
 
-Type: bool
-Description: Whether FE starts the MySQL server based on NiO model. It is 
recommended to turn off this option when the query connection is less than 1000 
or the concurrency scenario is not high.
-Default value: true
+Default:100
 
-### `mysql_service_io_threads_num`
+IsMutable:true
 
-Type: int
-Description: When FeEstarts the MySQL server based on NIO model, the number of 
threads responsible for IO events. Only `mysql_service_nio_enabled` is true 
takes effect.
-Default value: 4
+MasterOnly:true
 
-### `max_mysql_service_task_threads_num`
+if the number of balancing tablets in TabletScheduler exceeds max_balancing_tablets, no more balance checks are done
 
-Type: int
-Description: When FeEstarts the MySQL server based on NIO model, the number of 
threads responsible for Task events. Only `mysql_service_nio_enabled` is true 
takes effect.
-Default value: 4096
+### max_scheduling_tablets
 
-### `net_buffer_length`
+Default:2000
 
-### `net_read_timeout`
+IsMutable:true
 
-### `net_write_timeout`
+MasterOnly:true
 
-### `parallel_exchange_instance_num`
+if the number of scheduled tablets in TabletScheduler exceeds max_scheduling_tablets, skip checking.
 
-### `parallel_fragment_exec_instance_num`
+### disable_balance
 
-### `period_of_auto_resume_min`
+Default:false
 
-### `plugin_dir`
+IsMutable:true
 
-### `plugin_enable`
+MasterOnly:true
 
-### `priority_networks`
+if set to true, TabletScheduler will not do balance.
 
-### `proxy_auth_enable`
+### balance_load_score_threshold
 
-### `proxy_auth_magic_prefix`
+Default:0.1 (10%)
 
-### `publish_version_interval_ms`
+IsMutable:true
 
-### `publish_version_timeout_second`
+MasterOnly:true
 
-### `qe_max_connection`
+the threshold of the cluster balance score. If a backend's load score is 10% lower than the average score, this backend will be marked as LOW load; if the load score is 10% higher than the average score, it will be marked as HIGH load
 
-### `qe_slow_log_ms`
+### schedule_slot_num_per_path
 
-### `query_cache_size`
+Default:2
 
-### `query_cache_type`
+The default slot number per storage path in the tablet scheduler. This config may be removed in the future and adjusted dynamically based on clone task statistics.
 
-### `query_colocate_join_memory_limit_penalty_factor`
+### tablet_repair_delay_factor_second
 
-### `query_port`
+Default:60 (s)
 
-Type: int
-Description: FE MySQL server port
-Default value: 9030
+IsMutable:true
 
-### `query_timeout`
+MasterOnly:true
 
-### `remote_fragment_exec_timeout_ms`
+The factor of the delay time before deciding to repair a tablet. If the priority is VERY_HIGH, repair it immediately.
 
-### `replica_ack_policy`
+- HIGH: delay tablet_repair_delay_factor_second * 1 (60s by default);
+- NORMAL: delay tablet_repair_delay_factor_second * 2 (120s by default);
+- LOW: delay tablet_repair_delay_factor_second * 3 (180s by default).
 
-### `replica_delay_recovery_second`
+### es_state_sync_interval_second
 
-### `replica_sync_policy`
+Default:10
 
-### `report_queue_size`
+FE will call the ES API to get ES index shard info every es_state_sync_interval_second.
 
-### `resource_group`
+### disable_hadoop_load
 
-### `rewrite_count_distinct_to_bitmap_hll`
+Default:false
 
-This variable is a session variable, and the session level takes effect.
+IsMutable:true
+
+MasterOnly:true
 
-+ Type: boolean
-+ Description: **Only for the table of the AGG model**, when the variable is 
true, when the user query contains aggregate functions such as count(distinct 
c1), if the type of the c1 column itself is bitmap, count distnct will be 
rewritten It is bitmap_union_count(c1).
-         When the type of the c1 column itself is hll, count distinct will be 
rewritten as hll_union_agg(c1)
-         If the variable is false, no overwriting occurs.
-+ Default value: true.
+Load using a Hadoop cluster will be deprecated in the future. Set to true to disable this kind of load.
 
-### `rpc_port`
+### db_used_data_quota_update_interval_secs
 
-### `schedule_slot_num_per_path`
+Default:300 (s)
 
-### `small_file_dir`
+IsMutable:true
 
-### `SQL_AUTO_IS_NULL`
+MasterOnly:true
 
-### `sql_mode`
+For better data load performance, in the check of whether the amount of data 
used by the database before data load exceeds the quota, we do not calculate 
the amount of data already used by the database in real time, but obtain the 
periodically updated value of the daemon thread.
 
-### `sql_safe_updates`
+This configuration is used to set the time interval for updating the value of the amount of data used by the database.
 
-### `sql_select_limit`
+### disable_load_job
 
-### `storage_cooldown_second`
+Default:false
 
-### `storage_engine`
+IsMutable:true
 
-### `storage_flood_stage_left_capacity_bytes`
+MasterOnly:true
 
-### `storage_flood_stage_usage_percent`
+If this is set to true:
 
-### `storage_high_watermark_usage_percent`
+- all pending load jobs will fail when calling the begin txn API
+- all preparing load jobs will fail when calling the commit txn API
+- all committed load jobs will wait to be published
 
-### `storage_min_left_capacity_bytes`
+### catalog_try_lock_timeout_ms
 
-### `stream_load_default_timeout_second`
+Default:5000  (ms)
 
-### `sys_log_delete_age`
+IsMutable:true
 
-### `sys_log_dir`
+The tryLock timeout configuration of the catalog lock. Normally it does not need to be changed, unless you need to test something.
 
-### `sys_log_level`
+### max_query_retry_time
 
-### `sys_log_roll_interval`
+Default:2
 
-### `sys_log_roll_mode`
+IsMutable:true
 
-### `sys_log_roll_num`
+The number of query retries. A query may retry if we encounter an RPC exception and no result has been sent to the user. You may reduce this number to avoid an avalanche disaster.
 
-### `sys_log_verbose_modules`
+### remote_fragment_exec_timeout_ms
 
-### `system_time_zone`
+Default:5000  (ms)
 
-### `tablet_create_timeout_second`
+IsMutable:true
 
-### `tablet_delete_timeout_second`
+The timeout of executing an async remote fragment. In the normal case, the async remote fragment will be executed in a short time. If the system is under a high load condition, try setting this timeout longer.
 
-### `tablet_repair_delay_factor_second`
+### enable_local_replica_selection
 
-### `tablet_stat_update_interval_second`
+Default:false
 
-### `test_materialized_view`
+IsMutable:true
 
-### `thrift_backlog_num`
+If set to true, the Planner will try to select a replica of the tablet on the same host as this Frontend. This may reduce network transmission in the following case:
 
-### `thrift_client_timeout_ms`
+- N hosts with N Backends and N Frontends deployed.
+- The data has N replicas.
+- High concurrency queries are sent to all Frontends evenly.
+
+In this case, all Frontends can only use local replicas to do the query.
 
-The connection timeout and socket timeout config for thrift server.
+### max_unfinished_load_job
 
-The value for thrift_client_timeout_ms is set to be larger than zero to 
prevent some hang up problems in java.net.SocketInputStream.socketRead0.
+Default:1000
 
-### `thrift_server_max_worker_threads`
+IsMutable:true
 
-### `time_zone`
+MasterOnly:true
 
-### `tmp_dir`
+Max number of load jobs, including PENDING, ETL, LOADING, QUORUM_FINISHED. If this number is exceeded, load jobs are not allowed to be submitted.
 
-### `transaction_clean_interval_second`
+### max_bytes_per_broker_scanner
 
-### `tx_isolation`
+Default:3 * 1024 * 1024 * 1024L  (3G)
 
-### `txn_rollback_limit`
+IsMutable:true
 
-### `use_new_tablet_scheduler`
+MasterOnly:true
 
-### `use_v2_rollup`
+Max bytes a broker scanner can process in one broker load job. Commonly, each Backend has one broker scanner.
 
-### `using_old_load_usage_pattern`
+### enable_auth_check
 
-### `Variable Info`
+Default:true
 
-### `version`
+If set to false, auth checks will be disabled, in case something goes wrong with the new privilege system.
 
-### `version_comment`
+### tablet_stat_update_interval_second
 
-### `wait_timeout`
+Default:300 (5min)
 
-### `with_k8s_certs`
+The update interval of tablet stats. All frontends will get tablet stats from all backends at each interval.
 
-### `enable_strict_storage_medium_check`
+### storage_flood_stage_usage_percent  
 
-This configuration indicates that when the table is being built, it checks for 
the presence of the appropriate storage medium in the cluster. For example, 
when the user specifies that the storage medium is' SSD 'when the table is 
built, but only' HDD 'disks exist in the cluster,
+Default:95 (95%)
 
-If this parameter is' True ', the error 'Failed to find enough host in all 
Backends with storage medium with storage medium is SSD, need 3'.
+IsMutable:true
 
-If this parameter is' False ', no error is reported when the table is built. 
Instead, the table is built on a disk with 'HDD' as the storage medium.
+MasterOnly:true
 
-### `thrift_server_type`
+###  storage_flood_stage_left_capacity_bytes
 
-This configuration represents the service model used by The Thrift Service of 
FE, is of type String and is case-insensitive.
+Default:1 * 1024 * 1024 * 1024 (1GB)
 
-If the parameter is 'THREADED', then the 'TThreadedSelectorServer' model is 
used, which is a non-blocking I/O model, namely the master-slave Reactor model, 
which can timely respond to a large number of concurrent connection requests 
and performs well in most scenarios.
+IsMutable:true
 
-If this parameter is `THREAD_POOL`, then the `TThreadPoolServer` model is 
used, the model for blocking I/O model, use the thread pool to handle user 
connections, the number of simultaneous connections are limited by the number 
of thread pool, if we can estimate the number of concurrent requests in 
advance, and tolerant enough thread resources cost, this model will have a 
better performance, the service model is used by default.
+MasterOnly:true
 
-### `cache_enable_sql_mode`
+If the capacity of a disk reaches 'storage_flood_stage_usage_percent' and 'storage_flood_stage_left_capacity_bytes', the following operations will be rejected:
 
-If this switch is turned on, the SQL query result set will be cached. If the 
interval between the last visit version time in all partitions of all tables in 
the query is greater than cache_last_version_interval_second, and the result 
set is less than cache_result_max_row_count, the result set will be cached, and 
the next same SQL will hit the cache.
+1. load job
+2. restore job
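+
+To see how close each disk is to these limits, backend disk usage can be inspected; a sketch (the `SHOW PROC` path is assumed to match current Doris versions):
+
+```
+SHOW PROC '/backends';
+```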
 
-### `cache_enable_partition_mode`
+### storage_high_watermark_usage_percent
 
-When this switch is turned on, the query result set will be cached according 
to the partition. If the interval between the query table partition time and 
the query time is less than cache_last_version_interval_second, the result set 
will be cached according to the partition.
+Default:85  (85%)
 
-Part of the data will be obtained from the cache and some data from the disk 
when querying, and the data will be merged and returned to the client.
+IsMutable:true
+
+MasterOnly:true
+
+### storage_min_left_capacity_bytes
+
+Default: 2 * 1024 * 1024 * 1024  (2GB)
+
+IsMutable:true
+
+MasterOnly:true
+
+'storage_high_watermark_usage_percent' limits the max capacity usage percent of a Backend storage path. 'storage_min_left_capacity_bytes' limits the minimum left capacity of a Backend storage path. If both limits are reached, this storage path cannot be chosen as a tablet balance destination. But for tablet recovery, we may exceed these limits to keep data integrity as much as possible.
+
+### backup_job_default_timeout_ms
+
+Default:86400 * 1000  (1day)
+
+IsMutable:true
+
+MasterOnly:true
+
+default timeout of backup job
+
+### with_k8s_certs
+
+Default:false
+
+If the k8s deploy manager is used locally, set this to true and prepare the cert files.
+
+### dpp_hadoop_client_path
+
+Default:/lib/hadoop-client/hadoop/bin/hadoop
+
+### dpp_bytes_per_reduce
+
+Default:100 * 1024 * 1024L;   // 100M
+
+### dpp_default_cluster
+
+Default:palo-dpp
+
+### dpp_default_config_str
+
+Default:
+
+    {
+        hadoop_configs : 'mapred.job.priority=NORMAL;mapred.job.map.capacity=50;mapred.job.reduce.capacity=50;mapred.hce.replace.streaming=false;abaci.long.stored.job=true;dce.shuffle.enable=false;dfs.client.authserver.force_stop=true;dfs.client.auth.method=0'
+    }
+
+### dpp_config_str
+
+Default:
+
+    {
+        palo-dpp : {
+            hadoop_palo_path : '/dir',
+            hadoop_configs : 'fs.default.name=hdfs://host:port;mapred.job.tracker=host:port;hadoop.job.ugi=user,password'
+        }
+    }
+
+### enable_deploy_manager
+
+Default:disable
+
+Set this config if you deploy Palo using a third-party deploy manager. Valid options are:
+
+- disable: no deploy manager
+- k8s: Kubernetes
+- ambari: Ambari
+- local: Local File (for test or Boxer2 BCC version)
+
+### enable_token_check
+
+Default:true
+
+For forward compatibility; this will be removed later. Check the token when downloading the image file.
+
+### expr_depth_limit
+
+Default:3000
+
+IsMutable:true
+
+Limit on the depth of an expr tree. Exceeding this limit may cause a long analysis time while holding the db read lock. Do not set this unless you know what you are doing.
+
+### expr_children_limit
+
+Default:10000
+
+IsMutable:true
+
+Limit on the number of expr children of an expr tree. Exceeding this limit may cause a long analysis time while holding the database read lock. Do not set this unless you know what you are doing.

Review comment:
       additional 。

##########
File path: docs/en/administrator-guide/config/fe_config.md
##########
@@ -122,681 +122,1873 @@ There are two ways to configure FE configuration items:
 
 ## Configurations
 
-### `agent_task_resend_wait_time_ms`
+### max_dynamic_partition_num
 
-This configuration will decide whether to resend agent task when create_time 
for agent_task is set, only when current_time - create_time > 
agent_task_resend_wait_time_ms can ReportHandler do resend agent task.     
+Default:500
 
-This configuration is currently mainly used to solve the problem of repeated 
sending of `PUBLISH_VERSION` agent tasks. The current default value of this 
configuration is 5000, which is an experimental value.
+IsMutable:true
 
-Because there is a certain time delay between submitting agent tasks to 
AgentTaskQueue and submitting to be, Increasing the value of this configuration 
can effectively solve the problem of repeated sending of agent tasks,
+MasterOnly:true
 
-But at the same time, it will cause the submission of failed or failed 
execution of the agent task to be executed again for an extended period of time.
+Used to limit the maximum number of partitions that can be created when creating a dynamic partition table, to avoid creating too many partitions at one time. The number is determined by "start" and "end" in the dynamic partition parameters.
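+
+As a sketch, the partition count governed by this limit comes from the `dynamic_partition.start` and `dynamic_partition.end` properties (the table and column names below are illustrative, not from this document):
+
+```
+CREATE TABLE example_tbl (k1 DATE, v1 INT)
+PARTITION BY RANGE(k1) ()
+DISTRIBUTED BY HASH(k1) BUCKETS 32
+PROPERTIES (
+    "dynamic_partition.enable" = "true",
+    "dynamic_partition.time_unit" = "DAY",
+    "dynamic_partition.start" = "-7",
+    "dynamic_partition.end" = "3",
+    "dynamic_partition.prefix" = "p"
+);
+```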
 
-### `alter_table_timeout_second`
+### grpc_max_message_size_bytes
 
-### `async_load_task_pool_size`
+Default:1G
 
-This configuration is just for compatible with old version, this config has 
been replaced by async_loading_load_task_pool_size, it will be removed in the 
future.
+Used to set the initial flow window size of the GRPC client channel, and also used as the max message size. When the result set is large, you may need to increase this value.
 
-### `async_loading_load_task_pool_size`
+### enable_outfile_to_local
 
-The loading_load task executor pool size. This pool size limits the max 
running loading_load tasks.
+Default:false
+
+Whether to allow the outfile function to export the results to the local disk.
 
-Currently, it only limits the loading_load task of broker load.
+### enable_access_file_without_broker
 
-### `async_pending_load_task_pool_size`
+Default:false
 
-The pending_load task executor pool size. This pool size limits the max 
running pending_load tasks.
+IsMutable:true
 
-Currently, it only limits the pending_load task of broker load and spark load.
+MasterOnly:true
 
-It should be less than 'max_running_txn_num_per_db'
+This config is used to try to skip the broker when accessing BOS or other cloud storage that is normally accessed via a broker.
 
-### `audit_log_delete_age`
+### enable_bdbje_debug_mode
 
-### `audit_log_dir`
+Default:false
 
-### `audit_log_modules`
+If set to true, FE will be started in BDBJE debug mode
 
-### `audit_log_roll_interval`
+### enable_fe_heartbeat_by_thrift
 
-### `audit_log_roll_mode`
+Default:false
 
-### `audit_log_roll_num`
+IsMutable:true
 
-### `auth_token`
+MasterOnly:true
 
-### `autocommit`
+This config is used to solve the FE heartbeat response read_timeout problem. When this config is set to true, the master will get the FE heartbeat response via the thrift protocol instead of the http protocol. In order to maintain compatibility with old versions, the default is false, and the configuration cannot be changed to true until all FEs are upgraded.
 
-### `auto_increment_increment`
+### enable_alpha_rowset
 
-### `backup_job_default_timeout_ms`
+Default:false
 
-### `backup_plugin_path`
+Whether to support the creation of alpha rowset tables. The default is false and it should only be used in emergency situations; this config should be removed in some future version.
 
-### `balance_load_score_threshold`
+### enable_http_server_v2
 
-### `batch_size`
+Default:true since the official 0.14.0 release; false in earlier versions
 
-### `bdbje_heartbeat_timeout_second`
+HTTP Server V2 is implemented by SpringBoot. It uses an architecture that separates the front and back ends. Only when httpv2 is enabled can users use the new front-end UI interface.
 
-### `bdbje_lock_timeout_second`
+### http_max_file_size
 
-### `bdbje_replica_ack_timeout_second`
+### http_max_request_size
 
-Metadata will be synchronously written to multiple Follower FEs. This 
parameter is used to control the timeout period for Master FE to wait for 
Follower FE to send ack. When the written data is large, the ack time may be 
longer. If it times out, the metadata writing will fail and the FE process will 
exit. At this time, you can increase this parameter appropriately.
+Default:100M
 
-Default: 10 seconds.
+The above two parameters set the maximum upload file size of the http v2 web server. The default is 100M; you can modify it according to your needs.
 
-### `broker_load_default_timeout_second`
+### frontend_address
 
-### `brpc_idle_wait_max_time`
+Status: Deprecated, not recommended for use. This parameter may be deleted later.
+
+Type: string
+
+Description: Explicitly set the IP address of FE instead of using *InetAddress.getByName* to get the IP address, usually when *InetAddress.getByName* cannot obtain the expected result. Only an IP address is supported, not a hostname.
+
+Default value: 0.0.0.0
 
-### `brpc_number_of_concurrent_requests_processed`
+### default_max_filter_ratio
 
-### `capacity_used_percent_high_water`
+Default:0
 
-### `catalog_trash_expire_second`
+IsMutable:true
 
-### `catalog_try_lock_timeout_ms`
+MasterOnly:true
 
-### `character_set_client`
+Maximum percentage of data that can be filtered (due to reasons such as irregular data). The default value is 0.
 
-### `character_set_connection`
+### default_db_data_quota_bytes
 
-### `character_set_results`
+Default:1TB
 
-### `character_set_server`
+IsMutable:true
 
-### `check_consistency_default_timeout_second`
+MasterOnly:true
 
-### `check_java_version`
+Used to set the default database data quota size. To set the quota size of a 
single database, you can use:
 
-### `clone_capacity_balance_threshold`
+```
+-- Set the database data quota; the unit can be B/K/KB/M/MB/G/GB/T/TB/P/PB
+ALTER DATABASE db_name SET DATA QUOTA quota;
+-- View the configuration (see HELP SHOW DATA for details)
+SHOW DATA;
+```
 
-### `clone_checker_interval_second`
+### enable_batch_delete_by_default
 
-### `clone_distribution_balance_threshold`
+Default:false
 
-### `clone_high_priority_delay_second`
+IsMutable:true
 
-### `clone_job_timeout_second`
+MasterOnly:true
 
-### `clone_low_priority_delay_second`
+Whether to add a delete sign column when creating a unique table.
 
-### `clone_max_job_num`
+### recover_with_empty_tablet
 
-### `clone_normal_priority_delay_second`
+Default:false
 
-### `cluster_id`
+IsMutable:true
 
-### `cluster_name`
+MasterOnly:true
 
-### `codegen_level`
+In some very special circumstances, such as code bugs or human misoperation, all replicas of some tablets may be lost. In this case, the data has been substantially lost. However, in some scenarios, the business still hopes to ensure that the query will not report errors even if there is data loss, and reduce the perception of the user layer. At this point, we can use the blank Tablet to fill the missing replica to ensure that the query can be executed normally.
 
-### `collation_connection`
+Set to true so that Doris will automatically use blank replicas to fill tablets whose replicas have all been damaged or lost.
 
-### `collation_database`
+### max_allowed_in_element_num_of_delete
 
-### `collation_server`
+Default:1024
 
-### `consistency_check_end_time`
+IsMutable:true
 
-### `consistency_check_start_time`
+MasterOnly:true
 
-### `custom_config_dir`
+This configuration is used to limit element num of InPredicate in delete 
statement.
 
-Configure the location of the `fe_custom.conf` file. The default is in the 
`conf/` directory.
+### cache_result_max_row_count
 
-In some deployment environments, the `conf/` directory may be overwritten due 
to system upgrades. This will cause the user modified configuration items to be 
overwritten. At this time, we can store `fe_custom.conf` in another specified 
directory to prevent the configuration file from being overwritten.
+Default:3000
 
-### `db_used_data_quota_update_interval_secs`
+IsMutable:true
 
-For better data load performance, in the check of whether the amount of data 
used by the database before data load exceeds the quota, we do not calculate 
the amount of data already used by the database in real time, but obtain the 
periodically updated value of the daemon thread.
+MasterOnly:false
 
-This configuration is used to set the time interval for updating the value of 
the amount of data used by the database.
+In order to avoid occupying too much memory, the maximum number of rows that can be cached is 3000 by default. If this threshold is exceeded, the cache cannot be set.
 
-### `default_rowset_type`
+### cache_last_version_interval_second
 
-### `default_storage_medium`
+Default:900
 
-### `delete_thread_num`
+IsMutable:true
 
-### `desired_max_waiting_jobs`
+MasterOnly:false
 
-### `disable_balance`
+The latest partition version interval of the table, that is, the time interval between the data update and the current version. It is generally set to 900 seconds, which distinguishes offline from real-time import.
 
-### `disable_cluster_feature`
+### cache_enable_partition_mode
 
-### `disable_colocate_balance`
+Default:true
 
-### `disable_colocate_join`
+IsMutable:true
 
-### `disable_colocate_relocate`
+MasterOnly:false
 
-### `disable_hadoop_load`
+When this switch is turned on, the query result set will be cached according 
to the partition. If the interval between the query table partition time and 
the query time is less than cache_last_version_interval_second, the result set 
will be cached according to the partition.
 
-### `disable_load_job`
+Part of the data will be obtained from the cache and some data from the disk 
when querying, and the data will be merged and returned to the client.
 
-### `disable_streaming_preaggregations`
+### cache_enable_sql_mode
 
-### `div_precision_increment`
+Default:true
 
-### `dpp_bytes_per_reduce`
+IsMutable:true
 
-### `dpp_config_str`
+MasterOnly:false
 
-### `dpp_default_cluster`
+If this switch is turned on, the SQL query result set will be cached. If the 
interval between the last visit version time in all partitions of all tables in 
the query is greater than cache_last_version_interval_second, and the result 
set is less than cache_result_max_row_count, the result set will be cached, and 
the next same SQL will hit the cache
 
-### `dpp_default_config_str`
+If set to true, FE will enable the SQL result cache. This option is suitable for offline data update scenarios.
 
-### `dpp_hadoop_client_path`
+|                        | case1 | case2 | case3 | case4 |
+| ---------------------- | ----- | ----- | ----- | ----- |
+| enable_sql_cache       | false | true  | true  | false |
+| enable_partition_cache | false | false | true  | true  |
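+
+The two switches in the table are session variables; a minimal sketch of case2 (SQL cache on, partition cache off), assuming the standard Doris session variable names:
+
+```
+SET GLOBAL enable_sql_cache = true;
+SET GLOBAL enable_partition_cache = false;
+```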
 
-### `drop_backend_after_decommission`
+### min_clone_task_timeout_sec and max_clone_task_timeout_sec
 
-This configuration is used to control whether the system drops the BE after 
successfully decommissioning the BE. If true, the BE node will be deleted after 
the BE is successfully offline. If false, after the BE successfully goes 
offline, the BE will remain in the DECOMMISSION state, but will not be dropped.
+Default:Minimum 3 minutes, maximum two hours
 
-This configuration can play a role in certain scenarios. Assume that the 
initial state of a Doris cluster is one disk per BE node. After running for a 
period of time, the system has been vertically expanded, that is, each BE node 
adds 2 new disks. Because Doris currently does not support data balancing among 
the disks within the BE, the data volume of the initial disk may always be much 
higher than the data volume of the newly added disk. At this time, we can 
perform manual inter-disk balancing by the following operations:
+IsMutable:true
 
-1. Set the configuration item to false.
-2. Perform a decommission operation on a certain BE node. This operation will 
migrate all data on the BE to other nodes.
-3. After the decommission operation is completed, the BE will not be dropped. 
At this time, cancel the decommission status of the BE. Then the data will 
start to balance from other BE nodes back to this node. At this time, the data 
will be evenly distributed to all disks of the BE.
-4. Perform steps 2 and 3 for all BE nodes in sequence, and finally achieve the 
purpose of disk balancing for all nodes.
+MasterOnly:true
 
-### `dynamic_partition_check_interval_seconds`
+Type: long
+
+Description: Used to control the maximum timeout of a clone task. The unit is seconds.
+
+Default value: 7200
+
+Dynamic modification: yes
 
-### `dynamic_partition_enable`
+Can cooperate with `min_clone_task_timeout_sec` to control the maximum and minimum timeout of a clone task. Under normal circumstances, the timeout of a clone task is estimated by the amount of data and the minimum transfer rate (5MB/s). In some special cases, these two configurations can be used to set the upper and lower bounds of the clone task timeout to ensure that the clone task can be completed successfully.
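+
+Both bounds are mutable, so they can be adjusted at runtime when the defaults do not fit (for example, very large tablets on slow links); a sketch with arbitrary example values:
+
+```
+ADMIN SET FRONTEND CONFIG ("min_clone_task_timeout_sec" = "180");
+ADMIN SET FRONTEND CONFIG ("max_clone_task_timeout_sec" = "7200");
+```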
 
-### `edit_log_port`
+### agent_task_resend_wait_time_ms
 
-### `edit_log_roll_num`
+Default:5000
 
-### `edit_log_type`
+IsMutable:true
 
-### `enable_auth_check`
+MasterOnly:true
 
-### `enable_batch_delete_by_default`
-Whether to add a delete sign column when create unique table
+This configuration will decide whether to resend agent task when create_time 
for agent_task is set, only when current_time - create_time > 
agent_task_resend_wait_time_ms can ReportHandler do resend agent task.
 
-### `enable_deploy_manager`
+This configuration is currently mainly used to solve the problem of repeated 
sending of `PUBLISH_VERSION` agent tasks. The current default value of this 
configuration is 5000, which is an experimental value.
 
-### `enable_insert_strict`
+Because there is a certain time delay between submitting agent tasks to the AgentTaskQueue and submitting them to the BE, increasing the value of this configuration can effectively solve the problem of repeated sending of agent tasks,
 
-### `enable_local_replica_selection`
+but at the same time, it will cause agent tasks that failed to be submitted or failed to execute to be retried only after an extended period of time.
 
-### `enable_materialized_view`
+### enable_odbc_table
 
-This configuration is used to turn on and off the creation of materialized 
views. If set to true, the function to create a materialized view is enabled. 
The user can create a materialized view through the `CREATE MATERIALIZED VIEW` 
command. If set to false, materialized views cannot be created.
+Default:false
 
-If you get an error `The materialized view is coming soon` or `The 
materialized view is disabled` when creating the materialized view, it means 
that the configuration is set to false and the function of creating the 
materialized view is turned off. You can start to create a materialized view by 
modifying the configuration to true.
+IsMutable:true
 
-This variable is a dynamic configuration, and users can modify the 
configuration through commands after the FE process starts. You can also modify 
the FE configuration file and restart the FE to take effect.
+MasterOnly:true
 
-### `enable_metric_calculator`
+Whether to enable the ODBC table. It is not enabled by default; you need to manually configure it when you use it. This parameter can be set by: `ADMIN SET FRONTEND CONFIG("key"="value")`.
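+
+For example, to turn on the ODBC table with the command mentioned above:
+
+```
+ADMIN SET FRONTEND CONFIG ("enable_odbc_table" = "true");
+```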
 
-### `enable_spilling`
+### enable_spark_load
 
-### `enable_token_check`
+Default:false
 
-### `es_state_sync_interval_second`
+IsMutable:true
 
-### `event_scheduler`
+MasterOnly:true
 
-### `exec_mem_limit`
+Whether to temporarily enable Spark load. It is not enabled by default.
 
-### `export_checker_interval_second`
+### enable_strict_storage_medium_check
 
-### `export_running_job_num_limit`
+Default:false
 
-### `export_tablet_num_per_task`
+IsMutable:true
 
-### `export_task_default_timeout_second`
+MasterOnly:true
 
-### `expr_children_limit`
+This configuration indicates that when a table is created, the cluster is checked for the presence of the specified storage medium. For example, when the user specifies 'SSD' as the storage medium at table creation, but only 'HDD' disks exist in the cluster:
 
-### `expr_depth_limit`
+If this parameter is true, the error 'Failed to find enough host in all Backends with storage medium is SSD, need 3' is reported.
 
-### `force_do_metadata_checkpoint`
+If this parameter is false, no error is reported when the table is created. Instead, the table is created on a disk with 'HDD' as the storage medium.
 
-### `forward_to_master`
+### drop_backend_after_decommission
 
-### `frontend_address`
+Default:false
 
-Status: Deprecated, not recommended use. This parameter may be deleted later
-Type: string
-Description: Explicitly set the IP address of FE instead of using 
*InetAddress.getByName* to get the IP address. Usually in 
*InetAddress.getByName* When the expected results cannot be obtained. Only IP 
address is supported, not hostname.
-Default value: 0.0.0.0
+IsMutable:true
 
-### `hadoop_load_default_timeout_second`
+MasterOnly:true
 
-### `heartbeat_mgr_blocking_queue_size`
+1. This configuration is used to control whether the system drops the BE after 
successfully decommissioning the BE. If true, the BE node will be deleted after 
the BE is successfully offline. If false, after the BE successfully goes 
offline, the BE will remain in the DECOMMISSION state, but will not be dropped.
 
-### `heartbeat_mgr_threads_num`
+   This configuration can play a role in certain scenarios. Assume that the 
initial state of a Doris cluster is one disk per BE node. After running for a 
period of time, the system has been vertically expanded, that is, each BE node 
adds 2 new disks. Because Doris currently does not support data balancing among 
the disks within the BE, the data volume of the initial disk may always be much 
higher than the data volume of the newly added disk. At this time, we can 
perform manual inter-disk balancing by the following operations:
 
-### `history_job_keep_max_second`
+   1. Set the configuration item to false.
+   2. Perform a decommission operation on a certain BE node. This operation will migrate all data on the BE to other nodes.
+   3. After the decommission operation is completed, the BE will not be dropped. At this time, cancel the decommission status of the BE. Then the data will start to balance from other BE nodes back to this node. At this time, the data will be evenly distributed to all disks of the BE.
+   4. Perform steps 2 and 3 for all BE nodes in sequence, and finally achieve the purpose of disk balancing for all nodes.
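+
+The decommission and cancel operations in steps 2 and 3 can be sketched with the corresponding statements (host and heartbeat port are placeholders):
+
+```
+ALTER SYSTEM DECOMMISSION BACKEND "be_host:9050";
+ALTER SYSTEM CANCEL DECOMMISSION BACKEND "be_host:9050";
+```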
 
-### `http_backlog_num`
+### period_of_auto_resume_min
 
-The backlog_num for netty http server, When you enlarge this backlog_num,
-you should enlarge the value in the linux /proc/sys/net/core/somaxconn file at 
the same time
+Default:5 (min)
 
-### `mysql_nio_backlog_num`
+IsMutable:true
 
-The backlog_num for mysql nio server, When you enlarge this backlog_num,
-you should enlarge the value in the linux /proc/sys/net/core/somaxconn file at 
the same time
+MasterOnly:true
 
-### `http_port`
+The period of automatically resuming Routine Load.
 
-HTTP bind port. Defaults to 8030.
+### max_tolerable_backend_down_num
 
-### `http_max_line_length`
+Default:0
 
-The max length of an HTTP URL. The unit of this configuration is BYTE. 
Defaults to 4096.
+IsMutable:true
 
-### `http_max_header_size`
+MasterOnly:true
 
-The max size of allowed HTTP headers. The unit of this configuration is BYTE. 
Defaults to 8192.
+With the default of 0, as long as one BE is down, Routine Load cannot be automatically recovered.
 
-### `ignore_meta_check`
+### enable_materialized_view
 
-### `init_connect`
+Default:true
 
-### `insert_load_default_timeout_second`
+IsMutable:true
 
-### `interactive_timeout`
+MasterOnly:true
 
-### `is_report_success`
+This configuration is used to turn on and off the creation of materialized 
views. If set to true, the function to create a materialized view is enabled. 
The user can create a materialized view through the `CREATE MATERIALIZED VIEW` 
command. If set to false, materialized views cannot be created.
+
+If you get an error `The materialized view is coming soon` or `The 
materialized view is disabled` when creating the materialized view, it means 
that the configuration is set to false and the function of creating the 
materialized view is turned off. You can start to create a materialized view by 
modifying the configuration to true.
 
-### `label_clean_interval_second`
+This is a dynamic configuration: users can modify it with a command after the FE process starts, or modify the FE configuration file and restart the FE for it to take effect.
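For example, since this config is mutable, it can be turned on at runtime from a MySQL client connected to the Master FE:

```sql
ADMIN SET FRONTEND CONFIG ("enable_materialized_view" = "true");
```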
 
-### `label_keep_max_second`
+### check_java_version
 
-### `language`
+Default:false
 
-### `license`
+If set to true, Doris will check whether the Java version used at compile time is compatible with the version used at runtime.
 
-### `load_checker_interval_second`
+### max_running_rollup_job_num_per_table
 
-### `load_etl_thread_num_high_priority`
+Default:1
 
-### `load_etl_thread_num_normal_priority`
+IsMutable:true
 
-### `load_input_size_limit_gb`
+MasterOnly:true
 
-### `load_mem_limit`
+Control the concurrency limit of Rollup jobs
 
-### `load_pending_thread_num_high_priority`
+### dynamic_partition_enable
 
-### `load_pending_thread_num_normal_priority`
+Default:true
 
-### `load_running_job_num_limit`
+IsMutable:true
 
-### `load_straggler_wait_second`
+MasterOnly:true
 
-### `locale`
+Whether to enable dynamic partition, enabled by default
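As a sketch, a table that uses dynamic partitioning might be created as follows; the database, table, and column names are illustrative:

```sql
CREATE TABLE example_db.tbl_dynamic
(
    k1 DATE,
    k2 INT
)
PARTITION BY RANGE(k1) ()
DISTRIBUTED BY HASH(k2) BUCKETS 8
PROPERTIES
(
    "dynamic_partition.enable" = "true",
    "dynamic_partition.time_unit" = "DAY",
    "dynamic_partition.start" = "-7",
    "dynamic_partition.end" = "3",
    "dynamic_partition.prefix" = "p"
);
```

The partition range implied by "start" and "end" must stay within `max_dynamic_partition_num`.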
 
-### `log_roll_size_mb`
+### dynamic_partition_check_interval_seconds
 
-### `lower_case_table_names`
+Default:600 (s)
 
-### `master_sync_policy`
+IsMutable:true
 
-### `max_agent_task_threads_num`
+MasterOnly:true
 
-### `max_allowed_in_element_num_of_delete`
+Decide how often to check dynamic partition
 
-This configuration is used to limit element num of InPredicate in delete 
statement. The default value is 1024.
+### disable_cluster_feature
 
-### `max_allowed_packet`
+Default:true
 
-### `max_backend_down_time_second`
+IsMutable:true
 
-### `max_balancing_tablets`
+The multi cluster feature will be deprecated in version 0.12. Setting this config to true will disable all operations related to the cluster feature, including:
+
+- create/drop cluster
+- add free backend / add backend to cluster / decommission cluster balance
+- change the backend num of a cluster
+- link/migration db
 
-### `max_bdbje_clock_delta_ms`
+### force_do_metadata_checkpoint
 
-### `max_broker_concurrency`
+Default:false
 
-### `max_bytes_per_broker_scanner`
+IsMutable:true
 
-### `max_clone_task_timeout_sec`
+MasterOnly:true
 
-Type: long
-Description: Used to control the maximum timeout of a clone task. The unit is 
second.
-Default value: 7200
-Dynamic modification: yes
+If set to true, the checkpoint thread will create the checkpoint regardless of the JVM memory usage percentage.
 
-Can cooperate with `mix_clone_task_timeout_sec` to control the maximum and 
minimum timeout of a clone task. Under normal circumstances, the timeout of a 
clone task is estimated by the amount of data and the minimum transfer rate 
(5MB/s). In some special cases, these two configurations can be used to set the 
upper and lower bounds of the clone task timeout to ensure that the clone task 
can be completed successfully.
+### metadata_checkpoint_memory_threshold
+
+Default:60  (60%)
+
+IsMutable:true
+
+MasterOnly:true
+
+If the JVM memory usage percentage (heap or old mem pool) exceeds this threshold, the checkpoint thread will not work, to avoid OOM.
 
-### `max_connection_scheduler_threads_num`
+### max_distribution_pruner_recursion_depth
 
-### `max_conn_per_user`
+Default:100
 
-### `max_create_table_timeout_second`
+IsMutable:true
 
-### `max_distribution_pruner_recursion_depth`
+MasterOnly:false
 
-### `max_layout_length_per_row`
+This limits the max recursion depth of the hash distribution pruner.
+      e.g.: where a in (5 elements) and b in (4 elements) and c in (3 elements) and d in (2 elements).
+      a/b/c/d are distribution columns, so the recursion depth will be 5 * 4 * 3 * 2 = 120, larger than 100,
+      so the distribution pruner will not work and will just return all buckets.
+      Increasing the depth can support distribution pruning for more elements, but may cost more CPU.
 
-### `max_load_timeout_second`
 
-### `max_query_retry_time`
+### using_old_load_usage_pattern
 
-### `max_routine_load_job_num`
+Default:false
 
-### `max_routine_load_task_concurrent_num`
+IsMutable:true
 
-### `max_routine_load_task_num_per_be`
+MasterOnly:true
 
-### `max_running_rollup_job_num_per_table`
+If set to true, an insert stmt that encounters a processing error will still return a label to the user, and the user can use this label to check the load job's status. The default value is false, which means that if the insert operation encounters an error, an exception is thrown to the user client directly, without a load label.
 
-### `max_running_txn_num_per_db`
+### small_file_dir
+
+Default:DORIS_HOME_DIR/small_files
+
+The directory used to save small files
+
+### max_small_file_size_bytes
+
+Default:1M
+
+IsMutable:true
+
+MasterOnly:true
+
+The max size of a single file stored in the SmallFileMgr
+
+### max_small_file_number
+
+Default:100
+
+IsMutable:true
+
+MasterOnly:true
+
+The max number of files stored in the SmallFileMgr
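Files counted against these limits are registered through the `CREATE FILE` statement, for example (the URL is a placeholder):

```sql
CREATE FILE "ca.pem"
PROPERTIES
(
    "url" = "https://example.com/kafka-key/ca.pem",
    "catalog" = "kafka"
);
```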
+
+### max_routine_load_task_num_per_be
+
+Default:5
+
+IsMutable:true
+
+MasterOnly:true
+
+the max concurrent routine load task num per BE. This is to limit the number of routine load tasks sent to a BE, and it should also be less than the BE config 'routine_load_thread_pool_size' (default 10), which is the routine load task thread pool size on the BE.
+
+### max_routine_load_task_concurrent_num
+
+Default:5
+
+IsMutable:true
+
+MasterOnly:true
+
+the max concurrent routine load task num of a single routine load job
+
+### max_routine_load_job_num
+
+Default:100
+
+the max routine load job num, including NEED_SCHEDULED, RUNNING, PAUSE
+
+### max_running_txn_num_per_db
+
+Default:100
+
+IsMutable:true
+
+MasterOnly:true
 
 This configuration is mainly used to control the number of concurrent load 
jobs of the same database.
 
 When there are too many load jobs running in the cluster, the newly submitted 
load jobs may report errors:
 
-```
+```text
 current running txns on db xxx is xx, larger than limit xx
 ```
 
 When this error is encountered, it means that the load jobs currently running 
in the cluster exceeds the configuration value. At this time, it is recommended 
to wait on the business side and retry the load jobs.
 
-Generally it is not recommended to increase this configuration value. An 
excessively high number of concurrency may cause excessive system load.
+Generally it is not recommended to increase this configuration value. An excessively high concurrency may cause excessive system load.
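If the limit does need to be raised for a particular workload, this config is mutable, so it can be changed at runtime on the Master FE; the value 200 below is only illustrative:

```sql
ADMIN SET FRONTEND CONFIG ("max_running_txn_num_per_db" = "200");
```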
 
-### `max_scheduling_tablets`
+### enable_metric_calculator
 
-### `max_small_file_number`
+Default:true
 
-### `max_small_file_size_bytes`
+If set to true, the metric collector will run as a daemon timer to collect metrics at a fixed interval
 
-### `max_stream_load_timeout_second`
+### report_queue_size
 
-This configuration is specifically used to limit timeout setting for stream 
load. It is to prevent that failed stream load transactions cannot be canceled 
within a short time because of the user's large timeout setting. 
+Default: 100
 
-### `max_tolerable_backend_down_num`
+IsMutable:true
 
-### `max_unfinished_load_job`
+MasterOnly:true
 
-### `metadata_checkopoint_memory_threshold`
+This threshold is to avoid piling up too many report tasks in the FE, which may cause an OOM exception. In some large Doris clusters, e.g. 100 Backends with ten million replicas, a tablet report may cost several seconds after some modification of metadata (drop partition, etc.). And each Backend reports tablet info every 1 min, so receiving reports without limit is unacceptable. We will optimize the processing speed of tablet reports in the future, but for now the report is simply discarded if the queue size exceeds the limit.
+    Some online time costs:
+       1. disk report: 0-1 ms
+       2. task report: 0-1 ms
+       3. tablet report (10000 replicas): 200ms
 
-### `metadata_failure_recovery`
+### partition_rebalance_max_moves_num_per_selection
 
-### `meta_delay_toleration_second`
+Default:10
 
-### `meta_dir`
+IsMutable:true
 
-Type: string
-Description: Doris meta data will be saved here.The storage of this dir is 
highly recommended as to be:
+MasterOnly:true
 
-* High write performance (SSD)
+Valid only if PartitionRebalancer is used.
 
-* Safe (RAID)
+### partition_rebalance_move_expire_after_access
 
-Default value: DORIS_HOME_DIR + "/doris-meta";
+Default:600   (s)
 
-### `meta_publish_timeout_ms`
+IsMutable:true
 
-### `min_bytes_per_broker_scanner`
+MasterOnly:true
 
-### `min_clone_task_timeout_sec`
+Valid only if PartitionRebalancer is used. If this is changed, cached moves will be cleared.
 
-Type: long
-Description: Used to control the minimum timeout of a clone task. The unit is 
second.
-Default value: 180
-Dynamic modification: yes
+### tablet_rebalancer_type
 
-See the description of `max_clone_task_timeout_sec`.
+Default:BeLoad
 
-### `mini_load_default_timeout_second`
+MasterOnly:true
 
-### `min_load_timeout_second`
+Rebalancer type (ignore case): BeLoad, Partition. If the type fails to parse, BeLoad is used as the default.
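Since this config is not marked as mutable, it has to be set in fe.conf before the FE starts, e.g.:

```text
tablet_rebalancer_type = Partition
```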
 
-### `mysql_service_nio_enabled`
+### max_balancing_tablets
 
-Type: bool
-Description: Whether FE starts the MySQL server based on NiO model. It is 
recommended to turn off this option when the query connection is less than 1000 
or the concurrency scenario is not high.
-Default value: true
+Default:100
 
-### `mysql_service_io_threads_num`
+IsMutable:true
 
-Type: int
-Description: When FeEstarts the MySQL server based on NIO model, the number of 
threads responsible for IO events. Only `mysql_service_nio_enabled` is true 
takes effect.
-Default value: 4
+MasterOnly:true
 
-### `max_mysql_service_task_threads_num`
+if the number of balancing tablets in TabletScheduler exceeds max_balancing_tablets, no more balance checks are done.
 
-Type: int
-Description: When FeEstarts the MySQL server based on NIO model, the number of 
threads responsible for Task events. Only `mysql_service_nio_enabled` is true 
takes effect.
-Default value: 4096
+### max_scheduling_tablets
 
-### `net_buffer_length`
+Default:2000
 
-### `net_read_timeout`
+IsMutable:true
 
-### `net_write_timeout`
+MasterOnly:true
 
-### `parallel_exchange_instance_num`
+if the number of scheduled tablets in TabletScheduler exceeds max_scheduling_tablets, checking is skipped.
 
-### `parallel_fragment_exec_instance_num`
+### disable_balance
 
-### `period_of_auto_resume_min`
+Default:false
 
-### `plugin_dir`
+IsMutable:true
 
-### `plugin_enable`
+MasterOnly:true
 
-### `priority_networks`
+if set to true, TabletScheduler will not do balance.
 
-### `proxy_auth_enable`
+### balance_load_score_threshold
 
-### `proxy_auth_magic_prefix`
+Default:0.1 (10%)
 
-### `publish_version_interval_ms`
+IsMutable:true
 
-### `publish_version_timeout_second`
+MasterOnly:true
 
-### `qe_max_connection`
+the threshold of the cluster balance score. If a backend's load score is 10% lower than the average score, this backend will be marked as LOW load; if the load score is 10% higher than the average score, it will be marked as HIGH load.
 
-### `qe_slow_log_ms`
+### schedule_slot_num_per_path
 
-### `query_cache_size`
+Default:2
 
-### `query_cache_type`
+the default slot number per path in the tablet scheduler (this config may be removed in the future and adjusted dynamically based on clone task statistics)
 
-### `query_colocate_join_memory_limit_penalty_factor`
+### tablet_repair_delay_factor_second
 
-### `query_port`
+Default:60 (s)
 
-Type: int
-Description: FE MySQL server port
-Default value: 9030
+IsMutable:true
 
-### `query_timeout`
+MasterOnly:true
 
-### `remote_fragment_exec_timeout_ms`
+the factor of the delay time before deciding to repair a tablet. If the priority is VERY_HIGH, repair it immediately.
 
-### `replica_ack_policy`
+- HIGH: delay tablet_repair_delay_factor_second * 1;
+- NORMAL: delay tablet_repair_delay_factor_second * 2;
+- LOW: delay tablet_repair_delay_factor_second * 3;
 
-### `replica_delay_recovery_second`
+### es_state_sync_interval_second
 
-### `replica_sync_policy`
+Default:10
 
-### `report_queue_size`
+the FE will call the ES API to get the ES index shard info every es_state_sync_interval_second
 
-### `resource_group`
+### disable_hadoop_load
 
-### `rewrite_count_distinct_to_bitmap_hll`
+Default:false
 
-This variable is a session variable, and the session level takes effect.
+IsMutable:true
+
+MasterOnly:true
 
-+ Type: boolean
-+ Description: **Only for the table of the AGG model**, when the variable is 
true, when the user query contains aggregate functions such as count(distinct 
c1), if the type of the c1 column itself is bitmap, count distnct will be 
rewritten It is bitmap_union_count(c1).
-         When the type of the c1 column itself is hll, count distinct will be 
rewritten as hll_union_agg(c1)
-         If the variable is false, no overwriting occurs.
-+ Default value: true.
+Load using a hadoop cluster will be deprecated in the future. Set to true to disable this kind of load.
 
-### `rpc_port`
+### db_used_data_quota_update_interval_secs
 
-### `schedule_slot_num_per_path`
+Default:300 (s)
 
-### `small_file_dir`
+IsMutable:true
 
-### `SQL_AUTO_IS_NULL`
+MasterOnly:true
 
-### `sql_mode`
+For better data load performance, in the check of whether the amount of data 
used by the database before data load exceeds the quota, we do not calculate 
the amount of data already used by the database in real time, but obtain the 
periodically updated value of the daemon thread.
 
-### `sql_safe_updates`
+This configuration is used to set the time interval for updating the value of the amount of data used by the database.
 
-### `sql_select_limit`
+### disable_load_job
 
-### `storage_cooldown_second`
+Default:false
 
-### `storage_engine`
+IsMutable:true
 
-### `storage_flood_stage_left_capacity_bytes`
+MasterOnly:true
 
-### `storage_flood_stage_usage_percent`
+if this is set to true:
 
-### `storage_high_watermark_usage_percent`
+- all pending load jobs will fail when the begin txn api is called
+- all preparing load jobs will fail when the commit txn api is called
+- all committed load jobs will wait to be published
 
-### `storage_min_left_capacity_bytes`
+### catalog_try_lock_timeout_ms
 
-### `stream_load_default_timeout_second`
+Default:5000  (ms)
 
-### `sys_log_delete_age`
+IsMutable:true
 
-### `sys_log_dir`
+The tryLock timeout configuration of the catalog lock. Normally it does not need to be changed, unless you need to test something.
 
-### `sys_log_level`
+### max_query_retry_time
 
-### `sys_log_roll_interval`
+Default:2
 
-### `sys_log_roll_mode`
+IsMutable:true
 
-### `sys_log_roll_num`
+The number of query retries. A query may retry if we encounter an RPC exception and no result has been sent to the user. You may reduce this number to avoid an avalanche disaster.
 
-### `sys_log_verbose_modules`
+### remote_fragment_exec_timeout_ms
 
-### `system_time_zone`
+Default:5000  (ms)
 
-### `tablet_create_timeout_second`
+IsMutable:true
 
-### `tablet_delete_timeout_second`
+The timeout of executing an async remote fragment. In the normal case, the async remote fragment will be executed in a short time. If the system is under high load, try setting this timeout longer.
 
-### `tablet_repair_delay_factor_second`
+### enable_local_replica_selection
 
-### `tablet_stat_update_interval_second`
+Default:false
 
-### `test_materialized_view`
+IsMutable:true
 
-### `thrift_backlog_num`
+If set to true, the Planner will try to select a replica of the tablet on the same host as this Frontend. This may reduce network transmission in the following case:
-### `thrift_client_timeout_ms`
+- N hosts with N Backends and N Frontends deployed.
+- The data has N replicas.
+- High concurrency queries are sent to all Frontends evenly.
+
+In this case, all Frontends can only use local replicas to do the query.
 
-The connection timeout and socket timeout config for thrift server.
+### max_unfinished_load_job
 
-The value for thrift_client_timeout_ms is set to be larger than zero to 
prevent some hang up problems in java.net.SocketInputStream.socketRead0.
+Default:1000
 
-### `thrift_server_max_worker_threads`
+IsMutable:true
 
-### `time_zone`
+MasterOnly:true
 
-### `tmp_dir`
+Max number of load jobs, including PENDING, ETL, LOADING, QUORUM_FINISHED. If this number is exceeded, load jobs are not allowed to be submitted.
 
-### `transaction_clean_interval_second`
+### max_bytes_per_broker_scanner
 
-### `tx_isolation`
+Default:3 * 1024 * 1024 * 1024L  (3G)
 
-### `txn_rollback_limit`
+IsMutable:true
 
-### `use_new_tablet_scheduler`
+MasterOnly:true
 
-### `use_v2_rollup`
+Max bytes a broker scanner can process in one broker load job. Commonly, each Backend has one broker scanner.
 
-### `using_old_load_usage_pattern`
+### enable_auth_check
 
-### `Variable Info`
+Default:true
 
-### `version`
+if set to false, auth check will be disabled, in case something goes wrong with the new privilege system.
 
-### `version_comment`
+### tablet_stat_update_interval_second
 
-### `wait_timeout`
+Default:300 (5min)
 
-### `with_k8s_certs`
+the update interval of tablet stat. All frontends will get tablet stat from all backends at each interval.
 
-### `enable_strict_storage_medium_check`
+### storage_flood_stage_usage_percent  
 
-This configuration indicates that when the table is being built, it checks for 
the presence of the appropriate storage medium in the cluster. For example, 
when the user specifies that the storage medium is' SSD 'when the table is 
built, but only' HDD 'disks exist in the cluster,
+Default:95 (95%)
 
-If this parameter is' True ', the error 'Failed to find enough host in all 
Backends with storage medium with storage medium is SSD, need 3'.
+IsMutable:true
 
-If this parameter is' False ', no error is reported when the table is built. 
Instead, the table is built on a disk with 'HDD' as the storage medium.
+MasterOnly:true
 
-### `thrift_server_type`
+### storage_flood_stage_left_capacity_bytes
 
-This configuration represents the service model used by The Thrift Service of 
FE, is of type String and is case-insensitive.
+Default:
 
-If this parameter is 'SIMPLE', then the 'TSimpleServer' model is used, which is generally not suitable for production and is limited to test use.
+       storage_flood_stage_left_capacity_bytes : 1 * 1024 * 1024 * 1024 (1GB)
 
-If the parameter is 'THREADED', then the 'TThreadedSelectorServer' model is 
used, which is a non-blocking I/O model, namely the master-slave Reactor model, 
which can timely respond to a large number of concurrent connection requests 
and performs well in most scenarios.
+IsMutable:true
 
-If this parameter is `THREAD_POOL`, then the `TThreadPoolServer` model is 
used, the model for blocking I/O model, use the thread pool to handle user 
connections, the number of simultaneous connections are limited by the number 
of thread pool, if we can estimate the number of concurrent requests in 
advance, and tolerant enough thread resources cost, this model will have a 
better performance, the service model is used by default.
+MasterOnly:true
 
-### `cache_enable_sql_mode`
+If the capacity of a disk reaches 'storage_flood_stage_usage_percent' and 'storage_flood_stage_left_capacity_bytes', the following operations will be rejected:
 
-If this switch is turned on, the SQL query result set will be cached. If the 
interval between the last visit version time in all partitions of all tables in 
the query is greater than cache_last_version_interval_second, and the result 
set is less than cache_result_max_row_count, the result set will be cached, and 
the next same SQL will hit the cache.
+1. load job
+2. restore job
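Both flood-stage configs are mutable, so they can be adjusted at runtime if needed; the values below are only illustrative:

```sql
ADMIN SET FRONTEND CONFIG ("storage_flood_stage_usage_percent" = "98");
ADMIN SET FRONTEND CONFIG ("storage_flood_stage_left_capacity_bytes" = "536870912");
```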
 
-### `cache_enable_partition_mode`
+### storage_high_watermark_usage_percent
 
-When this switch is turned on, the query result set will be cached according 
to the partition. If the interval between the query table partition time and 
the query time is less than cache_last_version_interval_second, the result set 
will be cached according to the partition.
+Default:85  (85%)
 
-Part of the data will be obtained from the cache and some data from the disk 
when querying, and the data will be merged and returned to the client.
+IsMutable:true
+
+MasterOnly:true
+
+### storage_min_left_capacity_bytes
+
+Default: 2 * 1024 * 1024 * 1024  (2GB)
+
+IsMutable:true
+
+MasterOnly:true
+
+'storage_high_watermark_usage_percent' limits the max capacity usage percent of a Backend storage path. 'storage_min_left_capacity_bytes' limits the minimum left capacity of a Backend storage path. If both limitations are reached, this storage path cannot be chosen as a tablet balance destination. But for tablet recovery, these limits may be exceeded to keep data integrity as much as possible.
+
+### backup_job_default_timeout_ms
+
+Default:86400 * 1000  (1day)
+
+IsMutable:true
+
+MasterOnly:true
+
+default timeout of backup job
+
+### with_k8s_certs
+
+Default:false
+
+If the k8s deploy manager is used locally, set this to true and prepare the cert files
+
+### dpp_hadoop_client_path
+
+Default:/lib/hadoop-client/hadoop/bin/hadoop
+
+### dpp_bytes_per_reduce
+
+Default:100 * 1024 * 1024L;   // 100M
+
+### dpp_default_cluster
+
+Default:palo-dpp
+
+### dpp_default_config_str
+
+Default:
+
+    {
+        hadoop_configs : 'mapred.job.priority=NORMAL;mapred.job.map.capacity=50;mapred.job.reduce.capacity=50;mapred.hce.replace.streaming=false;abaci.long.stored.job=true;dce.shuffle.enable=false;dfs.client.authserver.force_stop=true;dfs.client.auth.method=0'
+    }
+
+### dpp_config_str
+
+Default:
+
+    {
+        palo-dpp : {
+            hadoop_palo_path : '/dir',
+            hadoop_configs : 'fs.default.name=hdfs://host:port;mapred.job.tracker=host:port;hadoop.job.ugi=user,password'
+        }
+    }
+
+### enable_deploy_manager
+
+Default:disable
+
+Set this if you deploy Palo using a thirdparty deploy manager. Valid options are:
+
+- disable: no deploy manager
+- k8s: Kubernetes
+- ambari: Ambari
+- local: Local File (for test or Boxer2 BCC version)
+
+### enable_token_check
+
+Default:true
+
+For forward compatibility; will be removed later. Check the token when downloading the image file.
+
+### expr_depth_limit
+
+Default:3000
+
+IsMutable:true
+
+Limit on the depth of an expr tree. Exceeding this limit may cause long analysis time while holding the db read lock. Do not set this if you know what you are doing.
+
+### expr_children_limit
+
+Default:10000
+
+IsMutable:true
+
+Limit on the number of expr children of an expr tree. Exceeding this limit may cause long analysis time while holding the database read lock. Do not set this if you know what you are doing.。

Review comment:
       additional 。




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

Reply via email to