himanshug commented on a change in pull request #9407: query laning and load shedding
URL: https://github.com/apache/druid/pull/9407#discussion_r391343403
 
 

 ##########
 File path: docs/configuration/index.md
 ##########
 @@ -1481,9 +1481,35 @@ These Broker configurations can be defined in the `broker/runtime.properties` file.
 |`druid.broker.select.tier`|`highestPriority`, `lowestPriority`, `custom`|If segments are cross-replicated across tiers in a cluster, you can tell the broker to prefer to select segments in a tier with a certain priority.|`highestPriority`|
 |`druid.broker.select.tier.custom.priorities`|`An array of integer priorities.`|Select servers in tiers with a custom priority list.|None|
 
+##### Query laning
+
+*Laning strategies* allow you to control capacity utilization for heterogeneous query workloads. With laning, the broker examines and classifies a query for the purpose of assigning it to a 'lane'. Lanes have capacity limits, enforced by the broker, that can be used to ensure sufficient resources are available for other lanes or for interactive queries (with no lane), or to limit overall throughput for queries within the lane. Requests in excess of the capacity are discarded with an HTTP 429 status code.
+
+|Property|Description|Default|
+|--------|-----------|-------|
+|`druid.query.scheduler.numThreads`|Maximum number of HTTP threads to dedicate to query processing. To save HTTP thread capacity, this should be lower than `druid.server.http.numThreads`.|Unbounded|
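
As a sketch of how these settings might combine in a Broker `runtime.properties`, assuming a laning strategy named `hilo` with a `maxLowPercent` limit (the laning property names and values below are illustrative assumptions, not confirmed by the snippet above):

```properties
# Serve up to 40 HTTP threads total, but admit at most 10 concurrent queries,
# leaving headroom for non-query HTTP traffic.
druid.server.http.numThreads=40
druid.query.scheduler.numThreads=10
# Assumed laning settings: a 'hilo' strategy that caps low-priority queries
# at a percentage of the scheduler's query capacity.
druid.query.scheduler.laning.strategy=hilo
druid.query.scheduler.laning.maxLowPercent=20
```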
 
 Review comment:
   It is fine to let the user provide `druid.query.scheduler.numThreads` and compute `druid.server.http.numThreads` from it; it is just that, in most cases, one of those should not be touched by the user.
   
   There are a few advantages to maintaining the queues ourselves rather than letting Jetty do it:
   - We have no control over the Jetty queue: if a request is dropped, the end user sees a TCP connection close rather than an HTTP 429, so it is not clear to the client whether to retry or back off.
   - We don't know how much time a request waited in the Jetty queue, so request-time metrics don't account for that wait.
   - The Jetty queue is [probably] static in size; if we managed the queues ourselves, we would have the option of keeping them dynamically sized, and of doing other potentially useful things.
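
To illustrate the first point: an explicit 429 lets a client implement retry-with-backoff, which a bare TCP close cannot signal. A minimal sketch, where `send_query` is a hypothetical stand-in for posting a query to the Broker and returning the HTTP status code:

```python
import time

def query_with_backoff(send_query, max_retries=4, base_delay=0.1, sleep=time.sleep):
    """Send a query, retrying with exponential backoff on HTTP 429.

    `send_query` is a hypothetical callable returning an HTTP status code.
    A 429 is an explicit throttle signal (lane at capacity), so the client
    knows to back off and retry; any other non-200 status is raised.
    """
    for attempt in range(max_retries):
        status = send_query()
        if status == 200:
            return status
        if status == 429:
            # Explicit throttle signal: wait, doubling the delay each attempt.
            sleep(base_delay * (2 ** attempt))
            continue
        raise RuntimeError(f"unexpected status {status}")
    raise RuntimeError("gave up after repeated 429s")
```

A dropped TCP connection, by contrast, is indistinguishable from a network fault, so the client cannot tell whether backing off would help.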

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
