This is an automated email from the ASF dual-hosted git repository.

abhishek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new bd0080c4ce Update default values in docs (#14233)
bd0080c4ce is described below

commit bd0080c4cec6067def6928e8f00444a335de01e6
Author: Kashif Faraz <[email protected]>
AuthorDate: Tue May 9 19:13:51 2023 +0530

    Update default values in docs (#14233)
---
 docs/configuration/index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index ea15380260..f5d739dd4f 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -961,7 +961,7 @@ Issuing a GET request at the same URL will return the spec that is currently in
 |`killDataSourceWhitelist`|List of specific data sources for which kill tasks are sent if property `druid.coordinator.kill.on` is true. This can be a list of comma-separated data source names or a JSON array.|none|
 |`killPendingSegmentsSkipList`|List of data sources for which pendingSegments are _NOT_ cleaned up if property `druid.coordinator.kill.pendingSegments.on` is true. This can be a list of comma-separated data sources or a JSON array.|none|
 |`maxSegmentsInNodeLoadingQueue`|The maximum number of segments that can be queued for loading to any given server. This parameter can be used to speed up the segment loading process, especially if there are "slow" nodes in the cluster (with low loading speed) or if too many segments are scheduled to be replicated to a particular node (faster loading may be preferred over better segment distribution). The desired value depends on segment loading speed, acceptable replication time and numbe [...]
-|`useRoundRobinSegmentAssignment`|Boolean flag for whether segments should be assigned to historicals in a round-robin fashion. When disabled, segment assignment is done using the chosen balancer strategy. When enabled, this can speed up segment assignment, leaving balancing to lazily move segments to their optimal locations (based on the balancer strategy).|false|
+|`useRoundRobinSegmentAssignment`|Boolean flag for whether segments should be assigned to historicals in a round-robin fashion. When disabled, segment assignment is done using the chosen balancer strategy. When enabled, this can speed up segment assignment, leaving balancing to lazily move segments to their optimal locations (based on the balancer strategy).|true|
 |`decommissioningNodes`| List of historical servers to 'decommission'. The Coordinator will not assign new segments to 'decommissioning' servers, and segments will be moved away from them to be placed on non-decommissioning servers at the maximum rate specified by `decommissioningMaxPercentOfMaxSegmentsToMove`.|none|
 |`decommissioningMaxPercentOfMaxSegmentsToMove`| Upper limit of segments the Coordinator can move from decommissioning servers to active non-decommissioning servers during a single run. This value is relative to the total maximum number of segments that can be moved at any given time based upon the value of `maxSegmentsToMove`.<br /><br />If `decommissioningMaxPercentOfMaxSegmentsToMove` is 0, the Coordinator does not move segments to decommissioning servers, effectively putting them in [...]
 |`pauseCoordination`| Boolean flag for whether the Coordinator should execute its various duties of coordinating the cluster. Setting this to true essentially pauses all coordination work while allowing the API to remain up. Duties that are paused include all classes that implement the `CoordinatorDuty` interface. Such duties include: segment balancing, segment compaction, emission of metrics controlled by the dynamic coordinator config `emitBalancingStats`, submitting kill tasks [...]
@@ -1116,7 +1116,7 @@ These Overlord static configurations can be defined in the `overlord/runtime.pro
 |`druid.indexer.storage.type`|Choices are "local" or "metadata". Indicates whether incoming tasks should be stored locally (in heap) or in metadata storage. "local" is mainly for internal testing while "metadata" is recommended in production because storing incoming tasks in metadata storage allows tasks to be resumed if the Overlord fails.|local|
 |`druid.indexer.storage.recentlyFinishedThreshold`|Duration of time to store task results. Default is 24 hours. If you have hundreds of tasks running in a day, consider increasing this threshold.|PT24H|
 |`druid.indexer.tasklock.forceTimeChunkLock`|_**Setting this to false is still experimental**_<br/> If set, all tasks are forced to use time chunk locks. If not set, each task automatically chooses a lock type to use. This configuration can be overridden by setting `forceTimeChunkLock` in the [task context](../ingestion/tasks.md#context). See [Task Locking & Priority](../ingestion/tasks.md#context) for more details about locking in tasks.|true|
-|`druid.indexer.tasklock.batchSegmentAllocation`| If set to true, Druid performs segment allocate actions in batches to improve throughput and reduce the average `task/action/run/time`. See [batching `segmentAllocate` actions](../ingestion/tasks.md#batching-segmentallocate-actions) for details.|false|
+|`druid.indexer.tasklock.batchSegmentAllocation`| If set to true, Druid performs segment allocate actions in batches to improve throughput and reduce the average `task/action/run/time`. See [batching `segmentAllocate` actions](../ingestion/tasks.md#batching-segmentallocate-actions) for details.|true|
 |`druid.indexer.tasklock.batchAllocationWaitTime`|Number of milliseconds to wait after Druid adds the first segment allocate action to a batch before executing the batch. The wait allows more requests to join the batch, improving the average segment allocation run time. This configuration takes effect only if `batchSegmentAllocation` is enabled.|500|
 |`druid.indexer.task.default.context`|Default task context that is applied to all tasks submitted to the Overlord. Defaults in this config override neither the context values the user provides nor `druid.indexer.tasklock.forceTimeChunkLock`.|empty context|
 |`druid.indexer.queue.maxSize`|Maximum number of active tasks at one time.|Integer.MAX_VALUE|
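
For operators who prefer the previous assignment behavior, the new round-robin default can be overridden through the Coordinator dynamic configuration API referenced in the first hunk. A minimal sketch, assuming a Coordinator at localhost:8081 (host, port, and the queue-size value shown are illustrative, not prescribed by this commit):

    curl -X POST 'http://localhost:8081/druid/coordinator/v1/config' \
      -H 'Content-Type: application/json' \
      -d '{
            "useRoundRobinSegmentAssignment": false,
            "maxSegmentsInNodeLoadingQueue": 500
          }'

Issuing a GET request at the same URL returns the spec that is currently in effect, which is a quick way to confirm the override took.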
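The batched segment allocation default can likewise be reverted in the Overlord's static configuration. A sketch of the relevant lines in `overlord/runtime.properties`, with illustrative values:

    # Batched segment allocation is now enabled by default.
    # Uncomment to opt out:
    # druid.indexer.tasklock.batchSegmentAllocation=false

    # Milliseconds to wait after the first segment allocate action joins a
    # batch before the batch executes; only used while batching is enabled.
    druid.indexer.tasklock.batchAllocationWaitTime=500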


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
