This is an automated email from the ASF dual-hosted git repository.
abhishekrb pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git
The following commit(s) were added to refs/heads/master by this push:
new d7dfbebf974 [Docs]: Fix typo and update broadcast rules section (#16882)
d7dfbebf974 is described below
commit d7dfbebf974f3a5e9a4d68a3a78c6b22eea9cd33
Author: Abhishek Radhakrishnan <[email protected]>
AuthorDate: Mon Aug 12 13:55:33 2024 -0700
[Docs]: Fix typo and update broadcast rules section (#16882)
* Fix typo in waitUntilSegmentsLoad.
* Add a note on configuring druid.segmentCache.locations for broadcast rules.
* Update docs/operations/rule-configuration.md
Co-authored-by: Victoria Lim <[email protected]>
---------
Co-authored-by: Victoria Lim <[email protected]>
---
docs/multi-stage-query/reference.md | 2 +-
docs/operations/rule-configuration.md | 4 +++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/docs/multi-stage-query/reference.md b/docs/multi-stage-query/reference.md
index cf06156c658..0f8e710a59f 100644
--- a/docs/multi-stage-query/reference.md
+++ b/docs/multi-stage-query/reference.md
@@ -404,7 +404,7 @@ The following table lists the context parameters for the MSQ task engine:
| `durableShuffleStorage` | SELECT, INSERT, REPLACE <br /><br />Whether to use durable storage for shuffle mesh. To use this feature, configure the durable storage at the server level using `druid.msq.intermediate.storage.enable=true`. If these properties are not configured, any query with the context variable `durableShuffleStorage=true` fails with a configuration error. <br /><br /> | `false` |
| `faultTolerance` | SELECT, INSERT, REPLACE<br /><br /> Whether to turn on fault tolerance mode or not. Failed workers are retried based on [Limits](#limits). Cannot be used when `durableShuffleStorage` is explicitly set to false. | `false` |
| `selectDestination` | SELECT<br /><br /> Controls where the final result of the select query is written. <br />Use `taskReport` (the default) to write select results to the task report. <b>This is not scalable since the task report size explodes for large results.</b> <br/>Use `durableStorage` to write results to a durable storage location. <b>For large result sets, it's recommended to use `durableStorage`.</b> To configure durable storage, see [this](#durable-storage) section. | `taskReport` |
-| `waitUntilSegmentsLoad` | INSERT, REPLACE<br /><br /> If set, the ingest query waits for the generated segment to be loaded before exiting, else the ingest query exits without waiting. The task and live reports contain the information about the status of loading segments if this flag is set. This will ensure that any future queries made after the ingestion exits will include results from the ingestion. The drawback is that the controller task will stall till the segments are loaded. | [...]
+| `waitUntilSegmentsLoad` | INSERT, REPLACE<br /><br /> If set, the ingest query waits for the generated segments to be loaded before exiting, else the ingest query exits without waiting. The task and live reports contain the information about the status of loading segments if this flag is set. This will ensure that any future queries made after the ingestion exits will include results from the ingestion. The drawback is that the controller task will stall till the segments are loaded. | [...]
| `includeSegmentSource` | SELECT, INSERT, REPLACE<br /><br /> Controls the sources that are queried for results in addition to the segments present on deep storage. Can be `NONE` or `REALTIME`. If this value is `NONE`, only non-realtime (published and used) segments will be downloaded from deep storage. If this value is `REALTIME`, results will also be included from realtime tasks. `REALTIME` cannot be used while writing data into the same datasource it is read from. | `NONE` |
| `rowsPerPage` | SELECT<br /><br />The number of rows per page to target. The actual number of rows per page may be somewhat higher or lower than this number. In most cases, use the default.<br /> This property comes into effect only when `selectDestination` is set to `durableStorage`. | 100000 |
| `skipTypeVerification` | INSERT or REPLACE<br /><br />During query validation, Druid validates that [string arrays](../querying/arrays.md) and [multi-value dimensions](../querying/multi-value-dimensions.md) are not mixed in the same column. If you are intentionally migrating from one to the other, use this context parameter to disable type validation.<br /><br />Provide the column list as comma-separated values or as a JSON array in string form. | empty list |
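For context, the parameters in the table above are passed in the `context` object of an MSQ query request. The sketch below is a hedged illustration, not part of this commit: the datasource and column names are hypothetical, and `faultTolerance` is shown together with `durableShuffleStorage` because the table notes it cannot be used when durable shuffle storage is explicitly disabled.

```json
{
  "query": "INSERT INTO hypothetical_table SELECT __time, channel FROM hypothetical_source PARTITIONED BY DAY",
  "context": {
    "durableShuffleStorage": true,
    "faultTolerance": true,
    "waitUntilSegmentsLoad": true
  }
}
```

With `waitUntilSegmentsLoad` set, the controller task stalls until the generated segments are loaded, so queries issued after the ingestion exits will see the new data.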
diff --git a/docs/operations/rule-configuration.md b/docs/operations/rule-configuration.md
index 610c2fa6dbf..9117973f0b8 100644
--- a/docs/operations/rule-configuration.md
+++ b/docs/operations/rule-configuration.md
@@ -277,7 +277,9 @@ Set the following property:
## Broadcast rules
-Druid extensions use broadcast rules to load segment data onto all brokers in the cluster. Apply broadcast rules in a test environment, not in production.
+Druid extensions use broadcast rules to load segment data onto all Brokers in the cluster. Apply broadcast rules in a test environment, not in production.
+To use broadcast rules, ensure that `druid.segmentCache.locations` is configured on both Brokers and Historicals.
+This ensures that Druid can load the segments onto those servers. For more information, see [Segment cache size](../operations/basic-cluster-tuning.md#segment-cache-size).
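The note added above implies a server-side setting. As a hedged sketch (the cache path and size below are hypothetical placeholders, not values from the commit), a `runtime.properties` entry on a Broker or Historical might look like:

```properties
# Hypothetical values: local segment-cache path and maximum cache size in bytes.
druid.segmentCache.locations=[{"path":"/var/druid/segment-cache","maxSize":10000000000}]
```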
### Forever broadcast rule