This is an automated email from the ASF dual-hosted git repository.

victoria pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 1c76ebad3b Minor doc updates. (#14409)
1c76ebad3b is described below

commit 1c76ebad3b63069f8c3e5b422b945212badab196
Author: Abhishek Radhakrishnan <[email protected]>
AuthorDate: Mon Jun 12 15:24:48 2023 -0700

    Minor doc updates. (#14409)
    
    Co-authored-by: Victoria Lim <[email protected]>
---
 docs/configuration/index.md                        |  2 +-
 docs/operations/clean-metadata-store.md            |  6 +++---
 docs/operations/rule-configuration.md              |  8 ++++----
 extensions-contrib/opentelemetry-emitter/README.md | 18 +++++++++---------
 integration-tests/README.md                        |  2 +-
 5 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index b765b67c3e..37f237d756 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -855,7 +855,7 @@ These Coordinator static configurations can be defined in the `coordinator/runti
 |`druid.coordinator.kill.period`|How often to send kill tasks to the indexing service. Value must be greater than `druid.coordinator.period.indexingPeriod`. Only applies if kill is turned on.|P1D (1 Day)|
 |`druid.coordinator.kill.durationToRetain`|Only applies if you set `druid.coordinator.kill.on` to `true`. This value is ignored if `druid.coordinator.kill.ignoreDurationToRetain` is `true`. Valid configurations must be an ISO8601 period. Druid will not kill unused segments whose interval end date is beyond `now - durationToRetain`. `durationToRetain` can be a negative ISO8601 period, which would result in `now - durationToRetain` being in the future.<br /><br />Note that the `durationToRe [...]
 |`druid.coordinator.kill.ignoreDurationToRetain`|A way to override `druid.coordinator.kill.durationToRetain` and tell the coordinator that you do not care about the end date of unused segment intervals when it comes to killing them. If true, the coordinator considers all unused segments as eligible to be killed.|false|
-|`druid.coordinator.kill.maxSegments`|Kill at most n unused segments per kill task submission, must be greater than 0. Only applies and MUST be specified if kill is turned on.|100|
+|`druid.coordinator.kill.maxSegments`|The number of unused segments to kill per kill task. This number must be greater than 0. This only applies when `druid.coordinator.kill.on=true`.|100|
 |`druid.coordinator.balancer.strategy`|Specify the type of balancing strategy for the coordinator to use to distribute segments among the historicals. `cachingCost` is logically equivalent to `cost` but is more CPU-efficient on large clusters. `diskNormalized` weights the costs according to the servers' disk usage ratios - there are known issues with this strategy distributing segments unevenly across the cluster. `random` distributes segments among services randomly.|`cost`|
 |`druid.coordinator.balancer.cachingCost.awaitInitialization`|Whether to wait for segment view initialization before creating the `cachingCost` balancing strategy. This property is enabled only when `druid.coordinator.balancer.strategy` is `cachingCost`. If set to 'true', the Coordinator will not start to assign segments, until the segment view is initialized. If set to 'false', the Coordinator will fallback to use the `cost` balancing strategy only if the segment view is not initialized [...]
 |`druid.coordinator.loadqueuepeon.repeatDelay`|The start and repeat delay for the loadqueuepeon, which manages the load and drop of segments.|PT0.050S (50 ms)|
diff --git a/docs/operations/clean-metadata-store.md b/docs/operations/clean-metadata-store.md
index e81fa90eb2..8aa3c7dc32 100644
--- a/docs/operations/clean-metadata-store.md
+++ b/docs/operations/clean-metadata-store.md
@@ -163,7 +163,7 @@ For more detail, see [Task logging](../configuration/index.md#task-logging).
 Druid automatically cleans up metadata records, excluding compaction configuration records and indexer task logs.
 To disable automated metadata cleanup, set the following properties in the `coordinator/runtime.properties` file:
 
-```
+```properties
 # Keep unused segments
 druid.coordinator.kill.on=false
 
@@ -185,7 +185,7 @@ druid.coordinator.kill.datasource.on=false
 
 Consider a scenario where you have scripts to create and delete hundreds of datasources and related entities a day. You do not want to fill your metadata store with leftover records. The datasources and related entities tend to persist for only one or two days. Therefore, you want to run a cleanup job that identifies and removes leftover records that are at least four days old. The exception is for audit logs, which you need to retain for 30 days:
 
-```
+```properties
 ...
 # Schedule the metadata management store task for every hour:
 druid.coordinator.period.metadataStoreManagementPeriod=P1H
@@ -197,7 +197,7 @@ druid.coordinator.period.metadataStoreManagementPeriod=P1H
 # Required also for automated cleanup of rules and compaction configuration.
 
 druid.coordinator.kill.on=true
-druid.coordinator.kill.period=P1D 
+druid.coordinator.kill.period=P1D
 druid.coordinator.kill.durationToRetain=P4D
 druid.coordinator.kill.maxSegments=1000
 
diff --git a/docs/operations/rule-configuration.md b/docs/operations/rule-configuration.md
index 9719c877cc..0d75cf54e8 100644
--- a/docs/operations/rule-configuration.md
+++ b/docs/operations/rule-configuration.md
@@ -195,7 +195,7 @@ Forever drop rules have type `dropForever`:
 
 ```json
 {
-  "type": "dropForever",
+  "type": "dropForever"
 }
 ```
 
@@ -209,7 +209,7 @@ Period drop rules have type `dropByPeriod` and the following JSON structure:
 {
   "type": "dropByPeriod",
   "period": "P1M",
-  "includeFuture": true,
+  "includeFuture": true
 }
 ```
 
@@ -271,7 +271,7 @@ Forever broadcast rules have type `broadcastForever`:
 
 ```json
 {
-  "type": "broadcastForever",
+  "type": "broadcastForever"
 }
 ```
 
@@ -285,7 +285,7 @@ Period broadcast rules have type `broadcastByPeriod` and the following JSON stru
 {
   "type": "broadcastByPeriod",
   "period": "P1M",
-  "includeFuture": true,
+  "includeFuture": true
 }
 ```
 
diff --git a/extensions-contrib/opentelemetry-emitter/README.md b/extensions-contrib/opentelemetry-emitter/README.md
index f7298ea292..ce5639aafa 100644
--- a/extensions-contrib/opentelemetry-emitter/README.md
+++ b/extensions-contrib/opentelemetry-emitter/README.md
@@ -39,7 +39,7 @@ To enable the OpenTelemetry emitter, add the extension and enable the emitter in
 
 Load the plugin:
 
-```
+```properties
 druid.extensions.loadList=[..., "opentelemetry-emitter"]
 ```
 
@@ -47,13 +47,13 @@ Then there are 2 options:
 
 * You want to use only `opentelemetry-emitter`
 
-```
+```properties
 druid.emitter=opentelemetry
 ```
 
 * You want to use `opentelemetry-emitter` with other emitters
 
-```
+```properties
 druid.emitter=composing
 druid.emitter.composing.emitters=[..., "opentelemetry"]
 ```
@@ -66,7 +66,7 @@ _*More about Druid configuration [here](https://druid.apache.org/docs/latest/con
 
 Create `docker-compose.yaml` in your working dir:
 
-```
+```yaml
 version: "2"
 services:
 
@@ -86,7 +86,7 @@ services:
 
 Create `config.yaml` file with configuration for otel-collector:
 
-```
+```yaml
 version: "2"
 receivers:
@@ -116,7 +116,7 @@ service:
 
 Run otel-collector and zipkin.
 
-```
+```bash
 docker-compose up
 ```
 
@@ -124,7 +124,7 @@ docker-compose up
 
 Build Druid:
 
-```
+```bash
 mvn clean install -Pdist
 tar -C /tmp -xf distribution/target/apache-druid-0.21.0-bin.tar.gz
 cd /tmp/apache-druid-0.21.0
@@ -148,7 +148,7 @@ Load sample data - [example](https://druid.apache.org/docs/latest/tutorials/inde
 
 Create `query.json`:
 
-```
+```json
 {
    "query":"SELECT COUNT(*) as total FROM wiki WHERE countryName IS NOT NULL",
    "context":{
@@ -159,7 +159,7 @@ Create `query.json`:
 
 Send query:
 
-```
+```bash
 curl -XPOST -H'Content-Type: application/json' http://localhost:8888/druid/v2/sql/ -d @query.json
 ```
 
diff --git a/integration-tests/README.md b/integration-tests/README.md
index b9f4769941..54ff24d75e 100644
--- a/integration-tests/README.md
+++ b/integration-tests/README.md
@@ -315,7 +315,7 @@ To run tests on any druid cluster that is already running, create a configuratio
     }
 
 Set the environment variable `CONFIG_FILE` to the name of the configuration file:
-```
+```bash
 export CONFIG_FILE=<config file name>
 ```
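For reference, the kill-related coordinator settings documented in the table above fit together as shown below. This is an illustrative `coordinator/runtime.properties` sketch; the retention and period values are example choices, not part of this commit:

```properties
# Enable automated kill of unused segments (required for the settings below)
druid.coordinator.kill.on=true
# How often to submit kill tasks; must be greater than druid.coordinator.period.indexingPeriod
druid.coordinator.kill.period=P1D
# Keep unused segments whose interval end date falls within now - P90D
druid.coordinator.kill.durationToRetain=P90D
# Kill at most 100 unused segments per kill task
druid.coordinator.kill.maxSegments=100
```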
 


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
