kfaraz commented on code in PR #13503:
URL: https://github.com/apache/druid/pull/13503#discussion_r1040577738


##########
docs/configuration/index.md:
##########
@@ -1112,6 +1112,8 @@ These Overlord static configurations can be defined in the `overlord/runtime.pro
 |`druid.indexer.storage.type`|Choices are "local" or "metadata". Indicates whether incoming tasks should be stored locally (in heap) or in metadata storage. "local" is mainly for internal testing while "metadata" is recommended in production because storing incoming tasks in metadata storage allows tasks to be resumed if the Overlord fails.|local|
 |`druid.indexer.storage.recentlyFinishedThreshold`|Duration of time to store task results. Default is 24 hours. If you have hundreds of tasks running in a day, consider increasing this threshold.|PT24H|
 |`druid.indexer.tasklock.forceTimeChunkLock`|_**Setting this to false is still experimental**_<br/> If set to true, all tasks are forced to use time chunk locking. If set to false, each task automatically chooses a lock type to use. This configuration can be overwritten by setting `forceTimeChunkLock` in the [task context](../ingestion/tasks.md#context). See [Task Locking & Priority](../ingestion/tasks.md#context) for more details about locking in tasks.|true|
+|`druid.indexer.tasklock.batchSegmentAllocation`|If set to true, segment allocate actions are performed in batches to improve throughput and reduce the average `task/action/run/time`.|false|
+|`druid.indexer.tasklock.batchAllocationWaitTime`|Number of milliseconds to wait between adding the first segment allocate action to a batch and executing that batch. The wait allows more requests to be added to the batch, improving the average segment allocation time. This configuration takes effect only if `batchSegmentAllocation` is enabled.<br> Decrease this value __only if__ segment allocation fails due to metadata operations on a very large batch.|500|
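
For reference, enabling the two new settings in the Overlord's `runtime.properties` might look like the following sketch (values are illustrative, not recommendations):

```properties
# Batch segment allocate actions to improve throughput
druid.indexer.tasklock.batchSegmentAllocation=true
# Wait up to 500 ms for more requests to join a batch before executing it
druid.indexer.tasklock.batchAllocationWaitTime=500
```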

Review Comment:
   Yes, we can limit the batch size to a (hard-coded) max value, say 500. We 
have seen clusters where batches of 700-800 have executed successfully.
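
The capping behavior suggested above can be sketched as follows. This is an illustrative simulation, not Druid's actual implementation; the names `MAX_BATCH_SIZE` and `drain_batch` are hypothetical:

```python
from collections import deque

MAX_BATCH_SIZE = 500  # hypothetical hard-coded cap, as suggested in the review


def drain_batch(queue, max_batch_size=MAX_BATCH_SIZE):
    """Drain up to max_batch_size pending allocate requests into one batch."""
    batch = []
    while queue and len(batch) < max_batch_size:
        batch.append(queue.popleft())
    return batch


# 700 pending requests (a size seen to succeed uncapped) split into
# capped batches of 500 and 200.
pending = deque(range(700))
first = drain_batch(pending)
second = drain_batch(pending)
```

With the cap in place, an unusually large burst of allocate requests is executed as several bounded metadata operations instead of one oversized batch.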



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
