maytasm commented on a change in pull request #11440:
URL: https://github.com/apache/druid/pull/11440#discussion_r669325399



##########
File path: docs/configuration/index.md
##########
@@ -1015,6 +1015,7 @@ There are additional configs for autoscaling (if it is 
enabled):
 |`druid.indexer.autoscale.pendingTaskTimeout`|How long a task can be in 
"pending" state before the Overlord tries to scale up.|PT30S|
 |`druid.indexer.autoscale.workerVersion`|If set, will only create nodes of set 
version during autoscaling. Overrides dynamic configuration. |null|
 |`druid.indexer.autoscale.workerPort`|The port that MiddleManagers will run 
on.|8080|
+|`druid.indexer.autoscale.workerCapacityFallback`|Worker capacity used to determine the number of workers needed for autoscaling when no workers are currently running. If unset, or set to a value of 0 or less, the autoscaler will instead scale to `minNumWorkers` in the autoscaler config. Note: this config only applies to the `pendingTaskBased` provisioning strategy.|-1|
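
The fallback behavior described in the row above could be sketched roughly as follows. This is a hypothetical illustration, not Druid's actual provisioning code; the function and parameter names are made up for the sketch:

```python
# Hypothetical sketch of how a pendingTaskBased provisioning strategy might
# use the worker-capacity fallback config. Not Druid's real implementation.

def workers_needed(pending_tasks, running_worker_capacities,
                   capacity_fallback, min_num_workers):
    """Estimate how many workers to provision for the pending tasks."""
    if running_worker_capacities:
        # With workers running, use an observed per-worker capacity.
        capacity = running_worker_capacities[0]
    elif capacity_fallback > 0:
        # No workers running: fall back to the configured capacity value.
        capacity = capacity_fallback
    else:
        # Fallback unset or <= 0: scale to the autoscaler's minNumWorkers.
        return min_num_workers
    # Round up so every pending task has a slot.
    return -(-pending_tasks // capacity)
```

For example, with 10 pending tasks, no running workers, and a fallback capacity of 4, this sketch would provision 3 workers; with the fallback left at -1 it would simply return `minNumWorkers`.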

Review comment:
       Changed the config name to `workerCapacityHint`. I agree that for a 
homogeneous cluster this value should typically equal 
`druid.worker.capacity`. For a heterogeneous cluster, the current behavior 
when non-zero worker nodes are running is that the autoscaler picks one 
random node's capacity and uses that to determine the number of workers 
needed (not the average). Advising users to set this to the average would 
therefore differ from what the autoscaler uses when workers are actually 
running. That said, the average might be better, and the autoscaler could 
use the average when there are running worker nodes too.
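
The two capacity-selection policies discussed above can be sketched side by side. These are hypothetical helpers for illustration only, not Druid APIs:

```python
import random

# Hypothetical illustration of the two policies discussed above for a
# heterogeneous cluster: pick one random worker's capacity (current
# behavior) versus averaging across running workers (the alternative).

def capacity_random(worker_capacities, rng=random):
    # Current behavior: use one randomly chosen worker's capacity.
    return rng.choice(worker_capacities)

def capacity_average(worker_capacities):
    # Alternative: use the mean capacity across running workers.
    return sum(worker_capacities) / len(worker_capacities)
```

In a cluster with workers of capacity 4 and 16, the random policy yields either 4 or 16 (so the estimated worker count can swing widely between provisioning runs), while the average policy always yields 10.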




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


