This is an automated email from the ASF dual-hosted git repository.

kfaraz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 51104e8bb31 Docs: Remove references to Zk-based segment loading 
(#16360)
51104e8bb31 is described below

commit 51104e8bb31bf0a54396a33d06cfe5b74a274446
Author: Kashif Faraz <[email protected]>
AuthorDate: Wed May 1 08:06:00 2024 +0530

    Docs: Remove references to Zk-based segment loading (#16360)
    
    Follow up to #15705
    
    Changes:
    - Remove references to ZK-based segment loading in the docs
    - Fix doc for existing config 
`druid.coordinator.loadqueuepeon.http.repeatDelay`
---
 docs/configuration/index.md                    | 12 ++----------
 docs/design/zookeeper.md                       | 18 +++---------------
 docs/development/extensions-core/kubernetes.md |  3 +--
 3 files changed, 6 insertions(+), 27 deletions(-)

diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index 5f4c9902360..6b898792252 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -160,7 +160,6 @@ Druid interacts with ZooKeeper through a set of standard 
path configurations. We
 |`druid.zk.paths.propertiesPath`|ZooKeeper properties 
path.|`${druid.zk.paths.base}/properties`|
 |`druid.zk.paths.announcementsPath`|Druid service announcement 
path.|`${druid.zk.paths.base}/announcements`|
 |`druid.zk.paths.liveSegmentsPath`|Current path for where Druid services 
announce their segments.|`${druid.zk.paths.base}/segments`|
-|`druid.zk.paths.loadQueuePath`|Entries here cause Historical services to load 
and drop segments.|`${druid.zk.paths.base}/loadQueue`|
 |`druid.zk.paths.coordinatorPath`|Used by the Coordinator for leader 
election.|`${druid.zk.paths.base}/coordinator`|
 |`druid.zk.paths.servedSegmentsPath`|Deprecated. Legacy path for where Druid 
services announce their segments.|`${druid.zk.paths.base}/servedSegments`|
 
@@ -875,7 +874,8 @@ These Coordinator static configurations can be defined in 
the `coordinator/runti
 |`druid.coordinator.kill.maxSegments`|The number of unused segments to kill 
per kill task. This number must be greater than 0. This only applies when 
`druid.coordinator.kill.on=true`.|100|
 |`druid.coordinator.balancer.strategy`|Specify the type of balancing strategy 
for the Coordinator to use to distribute segments among the Historical 
services. `cachingCost` is logically equivalent to `cost` but is more 
CPU-efficient on large clusters. `diskNormalized` weights the costs according 
to the servers' disk usage ratios - there are known issues with this strategy 
distributing segments unevenly across the cluster. `random` distributes 
segments among services randomly.|`cost`|
 |`druid.coordinator.balancer.cachingCost.awaitInitialization`|Whether to wait 
for segment view initialization before creating the `cachingCost` balancing 
strategy. This property is enabled only when 
`druid.coordinator.balancer.strategy` is `cachingCost`. If set to true, the 
Coordinator will not start to assign segments, until the segment view is 
initialized. If set to false, the Coordinator will fallback to use the `cost` 
balancing strategy only if the segment view is not initialized yet [...]
-|`druid.coordinator.loadqueuepeon.repeatDelay`|The start and repeat delay for 
the `loadqueuepeon`, which manages the load and drop of segments.|`PT0.050S` 
(50 ms)|
+|`druid.coordinator.loadqueuepeon.http.repeatDelay`|The start and repeat delay 
(in milliseconds) for the load queue peon, which manages the load/drop queue of 
segments for any server.|1 minute|
+|`druid.coordinator.loadqueuepeon.http.batchSize`|Number of segment load/drop 
requests to batch in one HTTP request. Note that it must be smaller than 
`druid.segmentCache.numLoadingThreads` config on Historical service.|1|
 |`druid.coordinator.asOverlord.enabled`|Boolean value for whether this 
Coordinator service should act like an Overlord as well. This configuration 
allows users to simplify a Druid cluster by not having to deploy any standalone 
Overlord services. If set to true, then Overlord console is available at 
`http://coordinator-host:port/console.html` and be sure to set 
`druid.coordinator.asOverlord.overlordService` also.|false|
 |`druid.coordinator.asOverlord.overlordService`| Required, if 
`druid.coordinator.asOverlord.enabled` is `true`. This must be same value as 
`druid.service` on standalone Overlord services and 
`druid.selectors.indexing.serviceName` on Middle Managers.|NULL|
 |`druid.centralizedDatasourceSchema.enabled`|Boolean flag for enabling 
datasource schema building on the Coordinator. Note, when using MiddleManager 
to launch task, set 
`druid.indexer.fork.property.druid.centralizedDatasourceSchema.enabled` in 
MiddleManager runtime config. |false|
@@ -905,15 +905,8 @@ These Coordinator static configurations can be defined in 
the `coordinator/runti
 |Property|Possible values|Description|Default|
 |--------|---------------|-----------|-------|
 |`druid.serverview.type`|batch or http|Segment discovery method to use. "http" 
enables discovering segments using HTTP instead of ZooKeeper.|http|
-|`druid.coordinator.loadqueuepeon.type`|curator or http|Implementation to use 
to assign segment loads and drops to historicals. Curator-based implementation 
is now deprecated, so you should transition to using HTTP-based segment 
assignments.|http|
 |`druid.coordinator.segment.awaitInitializationOnStart`|true or false|Whether 
the Coordinator will wait for its view of segments to fully initialize before 
starting up. If set to 'true', the Coordinator's HTTP server will not start up, 
and the Coordinator will not announce itself as available, until the server 
view is initialized.|true|
 
-###### Additional config when "http" loadqueuepeon is used
-
-|Property|Description|Default|
-|--------|-----------|-------|
-|`druid.coordinator.loadqueuepeon.http.batchSize`|Number of segment load/drop 
requests to batch in one HTTP request. Note that it must be smaller than 
`druid.segmentCache.numLoadingThreads` config on Historical service.|1|
-
 ##### Metadata retrieval
 
 |Property|Description|Default|
@@ -1653,7 +1646,6 @@ These Historical configurations can be defined in the 
`historical/runtime.proper
 |`druid.segmentCache.numLoadingThreads`|How many segments to drop or load 
concurrently from deep storage. Note that the work of loading segments involves 
downloading segments from deep storage, decompressing them and loading them to 
a memory mapped location. So the work is not all I/O Bound. Depending on CPU 
and network load, one could possibly increase this config to a higher 
value.|max(1,Number of cores / 6)|
 |`druid.segmentCache.numBootstrapThreads`|How many segments to load 
concurrently during historical startup.|`druid.segmentCache.numLoadingThreads`|
 |`druid.segmentCache.lazyLoadOnStart`|Whether or not to load segment columns 
metadata lazily during historical startup. When set to true, Historical startup 
time will be dramatically improved by deferring segment loading until the first 
time that segment takes part in a query, which will incur this cost 
instead.|false|
-|`druid.coordinator.loadqueuepeon.curator.numCallbackThreads`|Number of 
threads for executing callback actions associated with loading or dropping of 
segments. One might want to increase this number when noticing clusters are 
lagging behind w.r.t. balancing segments across historical nodes.|2|
 |`druid.segmentCache.numThreadsToLoadSegmentsIntoPageCacheOnDownload`|Number 
of threads to asynchronously read segment index files into null output stream 
on each new segment download after the Historical service finishes 
bootstrapping. Recommended to set to 1 or 2 or leave unspecified to disable. 
See also 
`druid.segmentCache.numThreadsToLoadSegmentsIntoPageCacheOnBootstrap`|0|
 |`druid.segmentCache.numThreadsToLoadSegmentsIntoPageCacheOnBootstrap`|Number 
of threads to asynchronously read segment index files into null output stream 
during Historical service bootstrap. This thread pool is terminated after 
Historical service finishes bootstrapping. Recommended to set to half of 
available cores. If left unspecified, 
`druid.segmentCache.numThreadsToLoadSegmentsIntoPageCacheOnDownload` will be 
used. If both configs are unspecified, this feature is disabled. Preemptiv [...]
 
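For quick reference, the surviving HTTP load-queue settings from the tables above can be collected in the Coordinator's `runtime.properties`. This is an illustrative sketch using the documented defaults (the millisecond value follows the "(in milliseconds)" note in the updated table), not a tuning recommendation:

```properties
# HTTP-based segment assignment is now the only implementation;
# druid.coordinator.loadqueuepeon.type no longer needs to be set.

# Start and repeat delay for the load queue peon (1 minute, in milliseconds).
druid.coordinator.loadqueuepeon.http.repeatDelay=60000

# Segment load/drop requests batched per HTTP request. Must stay smaller
# than druid.segmentCache.numLoadingThreads on the Historical services.
druid.coordinator.loadqueuepeon.http.batchSize=1
```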
diff --git a/docs/design/zookeeper.md b/docs/design/zookeeper.md
index 8f1a19dff7e..50241bd3d9d 100644
--- a/docs/design/zookeeper.md
+++ b/docs/design/zookeeper.md
@@ -31,6 +31,7 @@ Apache Druid supports ZooKeeper versions 3.5.x and above.
 
 :::info
  Note: Starting with Apache Druid 0.22.0, support for ZooKeeper 3.4.x has been 
removed
+ Starting with Apache Druid 31.0.0, support for ZooKeeper-based segment loading has been removed.
 :::
 
 ## ZooKeeper Operations
@@ -39,9 +40,8 @@ The operations that happen over ZK are
 
 1.  [Coordinator](../design/coordinator.md) leader election
 2.  Segment "publishing" protocol from [Historical](../design/historical.md)
-3.  Segment load/drop protocol between [Coordinator](../design/coordinator.md) 
and [Historical](../design/historical.md)
-4.  [Overlord](../design/overlord.md) leader election
-5.  [Overlord](../design/overlord.md) and 
[MiddleManager](../design/middlemanager.md) task management
+3.  [Overlord](../design/overlord.md) leader election
+4.  [Overlord](../design/overlord.md) and 
[MiddleManager](../design/middlemanager.md) task management
 
 ## Coordinator Leader Election
 
@@ -74,15 +74,3 @@ 
${druid.zk.paths.servedSegmentsPath}/${druid.host}/_segment_identifier_
 ```
 
 Processes like the [Coordinator](../design/coordinator.md) and 
[Broker](../design/broker.md) can then watch these paths to see which processes 
are currently serving which segments.
-
-## Segment load/drop protocol between Coordinator and Historical
-
-The `loadQueuePath` is used for this.
-
-When the [Coordinator](../design/coordinator.md) decides that a 
[Historical](../design/historical.md) process should load or drop a segment, it 
writes an ephemeral znode to
-
-```
-${druid.zk.paths.loadQueuePath}/_host_of_historical_process/_segment_identifier
-```
-
-This znode will contain a payload that indicates to the Historical process 
what it should do with the given segment. When the Historical process is done 
with the work, it will delete the znode in order to signify to the Coordinator 
that it is complete.
diff --git a/docs/development/extensions-core/kubernetes.md 
b/docs/development/extensions-core/kubernetes.md
index 600c3ada21b..ac66cdda740 100644
--- a/docs/development/extensions-core/kubernetes.md
+++ b/docs/development/extensions-core/kubernetes.md
@@ -31,11 +31,10 @@ Apache Druid Extension to enable using Kubernetes API 
Server for node discovery
 
 To use this extension please make sure to  
[include](../../configuration/extensions.md#loading-extensions) 
`druid-kubernetes-extensions` in the extensions load list.
 
-This extension works together with HTTP based segment and task management in 
Druid. Consequently, following configurations must be set on all Druid nodes.
+This extension works together with HTTP-based segment and task management in 
Druid. Consequently, the following configurations must be set on all Druid 
nodes.
 
 `druid.zk.service.enabled=false`
 `druid.serverview.type=http`
-`druid.coordinator.loadqueuepeon.type=http`
 `druid.indexer.runner.type=httpRemote`
 `druid.discovery.type=k8s`
 
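With the `loadqueuepeon.type` line gone, the full set of configurations the Kubernetes discovery extension requires (per the updated doc above) reduces to:

```properties
druid.zk.service.enabled=false
druid.serverview.type=http
druid.indexer.runner.type=httpRemote
druid.discovery.type=k8s
```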


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
