This is an automated email from the ASF dual-hosted git repository.
fjy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git
The following commit(s) were added to refs/heads/master by this push:
new 2c380e3 Fix doc for automatic compaction (#6749)
2c380e3 is described below
commit 2c380e3a261c8486d643d083235d40621db2f570
Author: Jihoon Son <[email protected]>
AuthorDate: Mon Dec 17 11:44:33 2018 -0800
Fix doc for automatic compaction (#6749)
---
docs/content/design/coordinator.md | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/docs/content/design/coordinator.md b/docs/content/design/coordinator.md
index a858025..90f5e28 100644
--- a/docs/content/design/coordinator.md
+++ b/docs/content/design/coordinator.md
@@ -66,14 +66,14 @@ To ensure an even distribution of segments across historical nodes in the cluste
### Compacting Segments
Each run, the Druid coordinator compacts small segments abutting each other. This is useful when you have a lot of small
-segments which may degrade the query performance as well as increasing the disk usage. Note that the data for an interval
-cannot be compacted across the segments.
+segments which may degrade the query performance as well as increasing the disk space usage.
The coordinator first finds the segments to compact together based on the [segment search policy](#segment-search-policy).
-Once it finds some segments, it launches a [compact task](../ingestion/tasks.html#compaction-task) to compact those segments.
-The maximum number of running compact tasks is `max(sum of worker capacity * slotRatio, maxSlots)`.
-Note that even though `max(sum of worker capacity * slotRatio, maxSlots)` = 1, at least one compact task is always submitted
-once a compaction is configured for a dataSource. See [Compaction Configuration API](../operations/api-reference.html#compaction-configuration) to set those values.
+Once some segments are found, it launches a [compact task](../ingestion/tasks.html#compaction-task) to compact those segments.
+The maximum number of running compact tasks is `min(sum of worker capacity * slotRatio, maxSlots)`.
+Note that even though `min(sum of worker capacity * slotRatio, maxSlots)` = 0, at least one compact task is always submitted
+if the compaction is enabled for a dataSource.
+See [Compaction Configuration API](../operations/api-reference.html#compaction-configuration) and [Compaction Configuration](../configuration/index.html#compaction-dynamic-configuration) to enable the compaction.
Compact tasks might fail due to some reasons.
@@ -82,9 +82,6 @@ Compact tasks might fail due to some reasons.
Once a compact task fails, the coordinator simply finds the segments for the interval of the failed task again, and launches a new compact task in the next run.
-To use this feature, you need to set some configurations for dataSources you want to compact.
-Please see [Compaction Configuration](../configuration/index.html#compaction-dynamic-configuration) for more details.
-
### Segment Search Policy
#### Newest Segment First Policy
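The corrected formula in the patch, `min(sum of worker capacity * slotRatio, maxSlots)` with a floor of one task once compaction is enabled, can be sketched as follows. This is an illustrative Python sketch of the arithmetic only; the function name, the truncation to an integer, and the parameter names are assumptions, not Druid's actual implementation.

```python
def max_compact_tasks(total_worker_capacity: int, slot_ratio: float, max_slots: int) -> int:
    """Illustrative sketch of the compact-task slot limit described in the patched doc."""
    # The limit is min(sum of worker capacity * slotRatio, maxSlots);
    # truncating the product to an integer is an assumption for this sketch.
    limit = min(int(total_worker_capacity * slot_ratio), max_slots)
    # Even when the formula yields 0, at least one compact task is submitted
    # once compaction is enabled for a dataSource.
    return max(limit, 1)
```

For example, with 10 total worker slots, a slot ratio of 0.5, and maxSlots of 8, the limit is 5; with a single worker slot and the same ratio, the formula yields 0 but the floor still allows one compact task.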
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]