a2l007 commented on a change in pull request #11135:
URL: https://github.com/apache/druid/pull/11135#discussion_r617042227
##########
File path: docs/configuration/index.md
##########
@@ -822,6 +823,7 @@ Issuing a GET request at the same URL will return the spec that is currently in
|`decommissioningMaxPercentOfMaxSegmentsToMove`| The maximum number of segments that may be moved away from 'decommissioning' servers to non-decommissioning (that is, active) servers during one Coordinator run. This value is relative to the total maximum number of segment movements allowed in one run, which is set by `maxSegmentsToMove`. If `decommissioningMaxPercentOfMaxSegmentsToMove` is 0, segments will be moved neither from _nor to_ 'decommissioning' servers, effectively putting them in a sort of "maintenance" mode in which they do not participate in balancing or in assignment by load rules. Decommissioning can also stall if no active servers are available to take the segments. By tuning this maximum percentage of decommissioning segment movements, an operator can prioritize balancing and keep active servers from becoming overloaded, or instead shorten decommissioning time. The value should be between 0 and 100.|70|
|`pauseCoordination`| Boolean flag that controls whether the Coordinator executes its various duties of coordinating the cluster. Setting this to true essentially pauses all coordination work while allowing the API to remain up. Paused duties include all classes that implement the `CoordinatorDuty` interface: segment balancing, segment compaction, emission of metrics controlled by the dynamic coordinator config `emitBalancingStats`, submitting kill tasks for unused segments (if enabled), logging of used segments in the cluster, marking of newly unused or overshadowed segments, matching and execution of load/drop rules for used segments, and unloading segments that are no longer marked as used from Historical servers. For example, an admin might pause coordination during deep storage maintenance on HDFS Name Nodes that involves downtime, so that the Coordinator does not direct Historical Nodes to hit the Name Node with API requests until maintenance is done and the deep store is declared healthy for use again.|false|
|`replicateAfterLoadTimeout`| Boolean flag that controls whether additional replication is performed for segments that failed to load before `druid.coordinator.load.timeout` expired. If set to true, the Coordinator attempts to replicate the failed segment on a different Historical server. This helps improve segment availability when a few Historicals in the cluster are slow. However, the slow Historical may still load the segment later, in which case the Coordinator may issue drop requests if the segment becomes over-replicated.|false|
+|`maxNonPrimaryReplicantsToLoad`| The maximum number of non-primary segment replicants to load per Coordination run. This number can be set to put a hard upper limit on the number of replicants loaded. It is a tool that can help prevent long delays in new data becoming available for query after events that require the cluster to load many non-primary replicants, such as a Historical node disconnecting from the cluster. The default value effectively means there is no limit on the number of replicants loaded per coordination cycle.|`Integer.MAX_VALUE`|
Review comment:
It would be useful to add some guidance on a good starting value for this setting.
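For reference, here is a minimal sketch of how an operator might try out a value through the Coordinator's dynamic configuration endpoint. The endpoint path, the `localhost:8081` address, and the value `1000` are assumptions for illustration only, not a recommended starting point.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SetMaxNonPrimaryReplicantsToLoad
{
  public static void main(String[] args) throws Exception
  {
    // Assumed Coordinator address; adjust to your deployment.
    String coordinator = "http://localhost:8081";

    // Cap the number of non-primary replicants loaded per coordinator run.
    // The value below is a placeholder, not a recommendation. Note: depending
    // on the Druid version, omitted dynamic-config fields may revert to their
    // defaults, so in practice you would fetch the current config first and
    // resubmit it with only this field changed.
    String body = "{\"maxNonPrimaryReplicantsToLoad\": 1000}";

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(coordinator + "/druid/coordinator/v1/config"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}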
##########
File path: server/src/main/java/org/apache/druid/server/coordinator/duty/RunRules.java
##########
@@ -128,6 +130,18 @@ public DruidCoordinatorRuntimeParams run(DruidCoordinatorRuntimeParams params)
       boolean foundMatchingRule = false;
       for (Rule rule : rules) {
         if (rule.appliesTo(segment, now)) {
+          if (stats.getGlobalStat("totalNonPrimaryReplicantsLoaded")
+                  >= paramsWithReplicationManager.getCoordinatorDynamicConfig().getMaxNonPrimaryReplicantsToLoad()
+              && !paramsWithReplicationManager.getReplicationManager().isLoadPrimaryReplicantsOnly()) {
+            log.info(
+                "Maximum number of non-primary replicants [%d] have been loaded for the current RunRules execution. Only loading primary replicants from here on.",
Review comment:
Since this behavior applies only to the current coordinator run, the log message might be clearer with something like "Only loading primary replicants from here on for this coordinator run period".
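To make that concrete, here is a rough sketch of how the block from the hunk above might read with the reworded message. The argument passed to `log.info` and the call that switches the replication manager to primary-only loading are guesses, since the hunk is cut off before them.

if (stats.getGlobalStat("totalNonPrimaryReplicantsLoaded")
        >= paramsWithReplicationManager.getCoordinatorDynamicConfig().getMaxNonPrimaryReplicantsToLoad()
    && !paramsWithReplicationManager.getReplicationManager().isLoadPrimaryReplicantsOnly()) {
  // Reworded to stress that the cap only applies to the current coordinator run.
  log.info(
      "Maximum number of non-primary replicants [%d] have been loaded."
      + " Only loading primary replicants from here on for this coordinator run period.",
      paramsWithReplicationManager.getCoordinatorDynamicConfig().getMaxNonPrimaryReplicantsToLoad()
  );
  // Assumed follow-up: restrict the rest of this run to primary replicants only.
  paramsWithReplicationManager.getReplicationManager().setLoadPrimaryReplicantsOnly(true);
}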
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]