himanshug commented on a change in pull request #9407: query laning and load 
shedding
URL: https://github.com/apache/druid/pull/9407#discussion_r390549892
 
 

 ##########
 File path: docs/configuration/index.md
 ##########
 @@ -1481,9 +1481,35 @@ These Broker configurations can be defined in the 
`broker/runtime.properties` fi
 |`druid.broker.select.tier`|`highestPriority`, `lowestPriority`, `custom`|If 
segments are cross-replicated across tiers in a cluster, you can tell the 
broker to prefer to select segments in a tier with a certain 
priority.|`highestPriority`|
 |`druid.broker.select.tier.custom.priorities`|`An array of integer 
priorities.`|Select servers in tiers with a custom priority list.|None|
 
+##### Query laning
+
+*Laning strategies* allow you to control capacity utilization for 
heterogeneous query workloads. With laning, the broker examines and classifies 
a query for the purpose of assigning it to a 'lane'. Lanes have capacity 
limits, enforced by the broker, that can be used to ensure sufficient resources 
are available for other lanes or for interactive queries (with no lane), or to 
limit overall throughput for queries within the lane. Requests in excess of the 
capacity are discarded with an HTTP 429 status code.
+
+|Property|Description|Default|
+|--------|-----------|-------|
+|`druid.query.scheduler.numThreads`|Maximum number of HTTP threads to dedicate 
to query processing. To save HTTP thread capacity, this should be lower than 
`druid.server.http.numThreads`.|Unbounded|
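 The settings above could be exercised with a broker `runtime.properties` fragment along these lines (a sketch only; the laning strategy name and the `laning.*` property names are assumptions, not taken verbatim from this diff):

 ```properties
 # Sketch: illustrative broker runtime.properties for query laning.
 druid.server.http.numThreads=50
 # Keep some HTTP threads free for non-query endpoints.
 druid.query.scheduler.numThreads=40
 # Hypothetical 'hilo' laning: cap the share of threads that
 # low-priority queries may occupy; excess requests get HTTP 429.
 druid.query.scheduler.laning.strategy=hilo
 druid.query.scheduler.laning.maxLowPercent=20
 ```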
 
 Review comment:
   thanks, I understand the reasoning now.
   lookup endpoints already have a QoS filter so that they never consume more than two threads from Jetty. I wonder if, in this world, it makes sense to set up a QoS filter for the non-query endpoints as well (say, hardcoded to 2) so that we can ensure they don't end up consuming more Jetty threads than intended.
   Then `druid.query.scheduler.numThreads` could default to `druid.server.http.numThreads - numReservedForOthers` (with `numReservedForOthers = 4`), and users would likely never be expected to touch `druid.query.scheduler.numThreads`.
   
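   For what it's worth, Jetty's stock `org.eclipse.jetty.servlets.QoSFilter` could express the "hardcoded to 2" idea; a hypothetical `web.xml`-style fragment (Druid wires its filters programmatically, and the URL pattern below is only an example, so treat this as illustrative):
   
   ```xml
   <!-- Sketch: limit non-query endpoints to 2 concurrent Jetty threads.
        Druid registers filters in code; this web.xml form is illustrative. -->
   <filter>
     <filter-name>nonQueryQoS</filter-name>
     <filter-class>org.eclipse.jetty.servlets.QoSFilter</filter-class>
     <init-param>
       <param-name>maxRequests</param-name>
       <param-value>2</param-value>
     </init-param>
   </filter>
   <filter-mapping>
     <filter-name>nonQueryQoS</filter-name>
     <url-pattern>/druid/coordinator/*</url-pattern>
   </filter-mapping>
   ```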
   The major behavior change with lane usage is really losing the queuing of requests that absorbs spikes, and instead sending 429s immediately. In the future, we could introduce a mechanism that maintains a statically/dynamically sized [per-lane] waiting queue ourselves, alongside the concurrency limits in the lane strategy.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
